A Dockerfile is a text file that contains Docker instructions. When a build is executed, Docker runs the instructions in the Dockerfile to create the image automatically.
usage
The instructions inside the Dockerfile can access the context files.
Context is recursive, PATH contains all subdirectories, and URL contains all submodules.
For example, think of the current directory as the context,
$ docker build .
Sending build context to Docker daemon 6.51 MB
...
Builds are run by Docker daemons, not the CLI.
Build sends the entire context to the daemon, so it is best to use an empty directory as the context, put the Dockerfile in it, and add only the required files. To improve build performance, you can also add a .dockerignore file to exclude some files and directories.
Warning! Do not use the system root / as PATH, otherwise you will send everything in the root filesystem to the Docker daemon.
Dockerfile is usually placed in the context root directory, or you can use -f to specify other paths,
$ docker build -f /path/to/a/Dockerfile .
-t specifies the repository and tag for the resulting image,
$ docker build -t shykes/myapp .
Multiple tags are supported,
$ docker build -t shykes/myapp:1.0.2 -t shykes/myapp:latest .
The Docker daemon checks the Dockerfile before executing its instructions. If there is a syntax error, an error is reported,
$ docker build -t test/myapp .
Sending build context to Docker daemon 2.048 kB
Error response from daemon: Unknown instruction: RUNCMD
The Docker daemon executes the instructions one by one, committing the result of each to a new image and printing its ID. When the build finishes, the daemon automatically cleans up the context you sent.
RUN cd /tmp has no effect on the instructions that follow, because the daemon executes each instruction independently, as the sketch below illustrates.
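For example, a minimal sketch (paths are illustrative) of why the instruction-per-layer model matters,

# Has no effect on later instructions: the cd only applies within this RUN
RUN cd /tmp

# Either chain the commands inside one RUN ...
RUN cd /tmp && echo hello > output.txt

# ... or set the working directory for subsequent instructions with WORKDIR
WORKDIR /tmp
RUN echo hello > output.txt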
In order to speed up the build process, Docker reuses the intermediate image (cache).
$ docker build -t svendowideit/ambassador .
Sending build context to Docker daemon 15.36 kB
Step 1/4 : FROM alpine:3.2
 ---> 31f630c65071
Step 2/4 : MAINTAINER [email protected]
---> Using cache
---> 2a1c91448f5f
Step 3/4 : RUN apk update && apk add socat && rm -r /var/cache/
---> Using cache
---> 21ed6e7fbb73
Step 4/4 : CMD env | grep _TCP= | (sed 's/.*_PORT_\([0-9]*\)_TCP=tcp:\/\/\(.*\):\(.*\)/socat -t 100000000 TCP4-LISTEN:\1,fork,reuseaddr TCP4:\2:\3 \&/' && echo wait) | sh
---> Using cache
---> 7ea8aef582cc
Successfully built 7ea8aef582cc
The cache is from an image that has been locally built or loaded using docker load.
If you want to specify an image directly as the cache source, use --cache-from.
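For example, a short sketch of using --cache-from, assuming an image myrepo/myapp:latest pushed by an earlier build (the names are illustrative),

# Make the earlier image available locally as a cache source
$ docker pull myrepo/myapp:latest

# Allow its layers to be reused as cache for this build
$ docker build --cache-from myrepo/myapp:latest -t myrepo/myapp:new .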
format
# Comment
INSTRUCTION arguments
A line starting with # is either a comment or a parser directive (a hint that tells the parser to handle the Dockerfile specially).
Instructions are case insensitive, but to distinguish them from arguments, they are usually all uppercase.
Dockerfile executes instructions FROM top to bottom. The first instruction must be FROM, defining the parent image of the build. An image without parent is called a base image.
The # in the argument is not a comment, it’s part of the argument,
# Comment
RUN echo 'we are running some # of cool things'
Comments are removed before the Dockerfile directive is executed. The following is equivalent,
RUN echo hello \
# comment
world
and
RUN echo hello \
world
Note that comments do not support the line continuation character \.
Spaces before comments and instructions are ignored. The following is equivalent:
        # this is a comment-line
    RUN echo hello
RUN echo world
and
# this is a comment-line
RUN echo hello
RUN echo world
But the whitespace in the argument is going to be preserved,
RUN echo "\
     hello\
     world"
Parser directives
# directive=value
Parser directives are a special annotation to prompt the Parser for a special treatment.
Parser directives do not add layers to the build and are not shown as build steps.
Once a comment, blank line, or build instruction has been processed, Docker no longer looks for parser directives, so parser directives must be placed at the very top of the Dockerfile.
Parser directives are case-insensitive, but are generally agreed to be all lowercase. It is also agreed to be followed by a blank line.
Parser directives do not support line continuation characters.
Here are some invalid examples,
Invalid – uses a line continuation
# direc \
tive=value
Invalid – appears twice
# directive=value1
# directive=value2
FROM ImageName
Invalid – treated as a comment because it appears after a build instruction
FROM ImageName
# directive=value
Invalid – treated as a comment because it appears after an ordinary comment
# About my dockerfile
# directive=value
FROM ImageName
Invalid – the unknown directive is treated as a comment, so the known directive that follows it is also treated as a comment
# unknowndirective=value
# knowndirective=value
Non-line-breaking whitespace within a parser directive line is ignored; the following are all equivalent,
#directive=value
# directive =value
# directive= value
# directive = value
# dIrEcTiVe=value
Two parser directives are currently supported,
- syntax (requires BuildKit)
- escape
escape
Backslash (default)
# escape=\
Or a backtick
# escape=`
Used to specify an escape character. This is useful on Windows because \ is the path separator on Windows.
For instance,
FROM microsoft/nanoserver
COPY testfile.txt c:\\
RUN dir c:\
Will fail to execute,
PS C:\John> docker build -t cmd .
Sending build context to Docker daemon 3.072 kB
Step 1/2 : FROM microsoft/nanoserver
 ---> 22738ff49c6d
Step 2/2 : COPY testfile.txt c:\RUN dir c:
GetFileAttributesEx c:RUN: The system cannot find the file specified.
PS C:\John>
Use the escape directive to change the escape character from \ to `,
# escape=`
FROM microsoft/nanoserver
COPY testfile.txt c:\
RUN dir c:\
Successful execution,
PS C:\John> docker build -t succeeds --no-cache=true .
Sending build context to Docker daemon 3.072 kB
Step 1/3 : FROM microsoft/nanoserver
 ---> 22738ff49c6d
Step 2/3 : COPY testfile.txt c:\
---> 96655de338de
Removing intermediate container 4db9acbb1682
Step 3/3 : RUN dir c:\
---> Running in a2c157f842f5
Volume in drive C has no label.
Volume Serial Number is 7E6D-E0F7
Directory of c:\
10/05/2016 05:04 PM 1,894 License.txt
10/05/2016 02:22 PM <DIR> Program Files
10/05/2016 02:14 PM <DIR> Program Files (x86)
10/28/2016 11:18 AM 62 testfile.txt
10/28/2016 11:20 AM <DIR> Users
10/28/2016 11:20 AM <DIR> Windows
2 File(s) 1,956 bytes
4 Dir(s) 21,259,096,064 bytes free
---> 01c7f3bef04f
Removing intermediate container a2c157f842f5
Successfully built 01c7f3bef04f
PS C:\John>
Environment replacement
Environment variables (declared with the ENV instruction) can be referenced as variables in certain instructions and are interpreted by the Dockerfile. Escaping lets you include variable-like syntax literally in a statement.
Use $variable_name or ${variable_name} to refer to environment variables.
The braces form can be combined with other text, as in ${foo}_bar. It also supports a few standard bash modifiers,

- ${variable:-word}: if variable is set, the result is its value; if it is not set, the result is word.
- ${variable:+word}: if variable is set, the result is word; if it is not set, the result is an empty string.
Word can be either a string or another environment variable.
A variable reference can be preceded by an escape character: \$foo and \${foo} are rendered literally as $foo and ${foo} respectively.
Example,
FROM busybox
ENV foo /bar
WORKDIR ${foo} # WORKDIR /bar
ADD . $foo # ADD . /bar
COPY \$foo /quux # COPY $foo /quux
The following directives in Dockerfile support environment variables
ADD
COPY
ENV
EXPOSE
FROM
LABEL
STOPSIGNAL
USER
VOLUME
WORKDIR
ONBUILD (when combined with one of the instructions above)
It is important to note that variable substitution uses the same value for a variable throughout a single instruction,
ENV abc=hello
ENV abc=bye def=$abc
ENV ghi=$abc
The value of def is hello, not bye, because the assignment abc=bye on the same line has not taken effect yet.
The value of ghi is bye, because it is set in a separate instruction after abc was set to bye.
.dockerignore file
The.dockerignore file is at the root of the context and excludes matched files and directories from the context.
This prevents sending large files or sensitive files and directories to the Docker daemon when using ADD and COPY.
Context is defined by PATH and URL, so.dockerignore will match those two paths.
For matching purposes, /foo/bar and foo/bar are treated the same: both are relative to the root of the context.
Example,
# comment
*/temp*
*/*/temp*
temp?
Rule | Behavior |
---|---|
# comment | Ignored. |
*/temp* | Excludes files and directories starting with temp in any immediate subdirectory of the root, such as /somedir/temporary.txt and /somedir/temp. |
*/*/temp* | Excludes files and directories starting with temp two levels below the root, such as /somedir/subdir/temporary.txt. |
temp? | Excludes files and directories in the root whose names are temp plus one character, such as /tempa and /tempb. |
Matches follow the Filepath.Match rule of the Go language.
Docker also supports **, which matches any number of directories (including zero). For example, **/*.go excludes all files ending in .go anywhere in the context, including the context root.
If you exclude a group of files but want to keep a few of them, use the exception rule prefixed with !.
For example, exclude all .md files except README.md,
*.md
!README.md
Note that rule order matters: if the last rule were !README*.md instead, README-secret.md would not be excluded, because that exception also matches README-secret.md.
A .dockerignore file can even list the Dockerfile and the .dockerignore file themselves, but this does not keep them out of the context: they are still sent to the Docker daemon; ADD and COPY simply will not copy them into the image.
FROM
The FROM directive initializes a new build stage and sets the parent image for subsequent directives.
FROM [--platform=<platform>] <image> [AS <name>]
or
FROM [--platform=<platform>] <image>[:<tag>] [AS <name>]
or
FROM [--platform=<platform>] <image>[@<digest>] [AS <name>]
--platform defines the platform of the image, such as linux/amd64, linux/arm64, or windows/amd64, which makes multi-platform images possible.
The tag or digest is optional. If you leave it out, the latest tag is used by default. If the tag cannot be found, the builder reports an error.
AS name gives the build stage a name; it can be referenced in subsequent FROM and COPY --from=<name|index> instructions.
A Dockerfile can contain more than one FROM. Each FROM clears any state created by previous instructions, so note the last image ID output by the commit before each new FROM if you need it.
ARG is the only instruction that can precede FROM.
For example, --platform defaults to the platform of the build request. With the automatic platform ARGs (which depend on BuildKit) you can force a stage onto the native build platform (--platform=$BUILDPLATFORM) and use it to cross-compile for the target platform in a later stage.
How do you use FROM and ARG together?
The FROM directive supports variables declared by the ARG that appear before the first FROM.
ARG CODE_VERSION=latest
FROM base:${CODE_VERSION}
CMD /code/run-app
FROM extras:${CODE_VERSION}
CMD /code/run-extras
An ARG declared before the first FROM is outside any build stage, so it cannot be used in instructions after FROM. To use it inside a build stage, repeat the ARG instruction without a value,
ARG VERSION=latest
FROM busybox:$VERSION
ARG VERSION
RUN echo $VERSION > image_version
RUN
- RUN <command> (shell form; the command runs in /bin/sh -c on Linux or cmd /S /C on Windows)
- RUN ["executable", "param1", "param2"] (exec form)
The RUN command is executed in a new layer on top of the current image, and the commit result is used in the next step of the Dockerfile.
Commits in the RUN directive conform to the Docker philosophy, commit is cheap, containers can be created from any record in the image history, as in source control.
You can use different shells,
Shell format
RUN /bin/bash -c 'source $HOME/.bashrc; echo $HOME'
The exec format
RUN ["/bin/bash"."-c"."echo hello"]
Copy the code
The shell form invokes a command shell, while the exec form does not, so shell processing (such as variable substitution) does not happen in the exec form. To get shell processing with the exec form, run a shell explicitly, e.g. RUN ["sh", "-c", "echo $HOME"].
Note that the exec format is parsed as a JSON array, so you can only use double quotes. Also notice the backslash,
Incorrect,
RUN ["c:\windows\system32\tasklist.exe"]
Correct,
RUN ["c:\\windows\\system32\\tasklist.exe"]
By default, the cache for RUN instructions is enabled. For example, the result of RUN apt-get dist-upgrade -y will be reused in the next build. You can disable the cache with docker build --no-cache.
The cache for RUN instructions can also be invalidated by ADD and COPY instructions.
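For example, a short sketch (the image name is illustrative) of a build that ignores the cache entirely,

# Rebuild every layer instead of reusing cached intermediate images
$ docker build --no-cache -t myorg/myapp .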
CMD
CMD and RUN are different. The RUN command executes command and commit results during build. CMD does not execute any commands at build time. Instead, it defines commands for the image, which are executed when container (the container created by the image) starts.
CMD ["executable","param1","param2"]
(execFormat, preferred)CMD ["param1","param2"]
(ENTRYPOINTDefault parameters)CMD command param1 param2
(shellFormat)
A Dockerfile can have only one CMD directive, and if there are more than one, only the last one takes effect.
The shell form invokes a command shell, while the exec form does not, so shell processing (such as variable substitution) does not happen in the exec form. To get shell processing with the exec form, run a shell explicitly, e.g. CMD ["sh", "-c", "echo $HOME"].
Note that the exec format is parsed as a JSON array, so you can only use double quotes. Note also the backslash.
If you want the Container to run the same executable every time, you need to use it in conjunction with ENTRYPOINT.
If arguments are passed to docker run, they override the defaults defined by CMD, as the sketch below shows.
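A minimal sketch (image name and commands are illustrative) of CMD as an overridable default,

FROM ubuntu
# Default command when the container starts
CMD ["echo", "hello from CMD"]

Building this as myimage and running it,

$ docker run myimage            # prints "hello from CMD"
$ docker run myimage echo bye   # the arguments override CMD, so it prints "bye"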
LABEL
LABEL <key>=<value> <key>=<value> <key>=<value> ...
LABEL is used to add metadata to an image in the form of key-value pairs.
Example,
LABEL "com.example.vendor"="ACME Incorporated"
LABEL com.example.label-with-value="foo"
LABEL version="1.0"
LABEL description="This text illustrates \
that label-values can span multiple lines."
An image can have multiple labels, and a label can have multiple key-value pairs. The following is equivalent:
LABEL multi.label1="value1" multi.label2="value2" other="value3"
LABEL multi.label1="value1" \
multi.label2="value2" \
other="value3"
The label is inherited along with the image, from the base image or parent Image to the current image.
Duplicate labels overwrite old ones with the latest ones.
You can use the command to view the labels of image,
docker image inspect --format='{{json .Config.Labels}}' myimage
{
  "com.example.vendor": "ACME Incorporated",
  "com.example.label-with-value": "foo",
  "version": "1.0",
  "description": "This text illustrates that label-values can span multiple lines.",
  "multi.label1": "value1",
  "multi.label2": "value2",
  "other": "value3"
}
MAINTAINER
MAINTAINER is deprecated; use LABEL instead,
LABEL maintainer="[email protected]"
EXPOSE
EXPOSE <port> [<port>/<protocol>...]
EXPOSE declares the network ports that the container listens on. It supports TCP and UDP; the default protocol is TCP.
EXPOSE does not actually publish the port; it records which ports are intended to be published.
The actual publishing is done at docker run, using either -p or -P.
-p publishes one or more specific ports; -P publishes all exposed ports and maps them to ephemeral high-order host ports.
For example, the default is TCP, but you can specify UDP,
EXPOSE 80/udp
You can also define both TCP and UDP,
EXPOSE 80/tcp
EXPOSE 80/udp
If -P is used with docker run, port 80 is published once for TCP and once for UDP, and they get different host ports because each is mapped to a separate ephemeral high-order port.
Use -p to specify the port,
docker run -p 80:80/tcp -p 80:80/udp ...
You can also use Docker networks for container-to-container communication without exposing or publishing any ports, because containers on the same network can reach each other on any port.
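A rough sketch of that approach (the network, image, and container names are illustrative),

# Create a user-defined network
$ docker network create app-net

# Start two containers on it without exposing or publishing any ports
$ docker run -d --name db --network app-net postgres
$ docker run -d --name web --network app-net myapp

# Inside "web", the database is reachable by name (db) on any port it listens on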
ENV
ENV <key> <value>
ENV <key>=<value> ...
ENV is used to set environment variables. There are two forms, the following are equivalent,
ENV myName="John Doe" myDog=Rex\ The\ Dog \
myCat=fluffy
ENV myName John Doe
ENV myDog Rex The Dog
ENV myCat fluffy
You can use docker inspect to view environment variables, and docker run --env <key>=<value> to change them at run time.
ENV values persist both at build time and when the container runs, which can cause surprises. For example, ENV DEBIAN_FRONTEND noninteractive makes all apt-get operations non-interactive, which may confuse users of a Debian-based image. If a variable is only needed for a single command, set it inline for just that command instead, as in the sketch below.
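A minimal sketch of the difference, assuming a Debian-based image,

# Persists into the final image and affects every later command and container
ENV DEBIAN_FRONTEND noninteractive

# Scoped to a single command: only this RUN sees the variable
RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get install -y python3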
ADD
ADD [--chown=<user>:<group>] <src>... <dest>
ADD [--chown=<user>:<group>] ["<src>"."<dest>"]
Copy the code
ADD has two forms; the second is quoted so that paths containing spaces can be expressed.
--chown is only supported for Linux containers, not Windows.
ADD copies new files, directories, or remote file URLs from <src> and adds them to the image filesystem at the path <dest>.
If <src> is a file or directory, it is interpreted as a path relative to the build context. Wildcards are supported, following Go's filepath.Match rules.
For example, add all files starting with “hom”,
ADD hom* /mydir/
Use ? to match a single character,
ADD hom?.txt /mydir/
<dest> is an absolute path, or a path relative to WORKDIR.
Example, absolute path,
ADD test.txt /absoluteDir/
Example, relative path (resolves to <WORKDIR>/relativeDir/),
ADD test.txt relativeDir/
If the path contains special characters (such as [ and ]), they need to be escaped,
For example, add a file arr[0].txt,
ADD arr[[]0].txt /mydir/
For Linux, you can use --chown to specify a username, groupname, or UID/GID. By default, new files and directories are created with UID 0 and GID 0.
If only username is set without groupname, or only UID is set without GID, GID uses the same value as UID.
The username and groupname are converted into UIds and gids by the container’s root filesystem /etc/passwd and /etc/group. If the Container does not have the two files, an error message is displayed after username/groupname is set. This can be avoided by setting the UID/GID.
Example,
ADD --chown=55:mygroup files* /somedir/
ADD --chown=bin files* /somedir/
ADD --chown=1 files* /somedir/
ADD --chown=10:11 files* /somedir/
If you build by passing a Dockerfile through STDIN (docker build - < somefile), there is no build context, so ADD can only be used with a remote URL. You can also pass a compressed archive through STDIN (docker build - < archive.tar.gz); the Dockerfile at the root of the archive and the rest of the archive are used as the build context.
If <src> is a remote file URL, the destination file is created with permissions of 600. If the remote file has an HTTP Last-Modified header, its timestamp is used to set the mtime of the destination file. However, mtime is not used to decide whether the file has changed or whether the cache should be updated.
If the URL file needs authorization, ADD does not support it. Use RUN wget, RUN curl, or another tool in the Container.
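For example, a rough sketch of fetching a protected file with RUN curl instead of ADD (the URL and credentials are placeholders),

FROM ubuntu
RUN apt-get update && apt-get install -y curl
# ADD cannot send credentials, so download inside the container instead
RUN curl -fsSL -u user:password -o /tmp/private.tar.gz https://example.com/private.tar.gz \
    && tar -xzf /tmp/private.tar.gz -C /opt \
    && rm /tmp/private.tar.gz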
ADD follows the following rules:
- <src> must be inside the build context; you cannot ADD ../something /something, because the first step of docker build is to send the context directory (and its subdirectories) to the Docker daemon.
- If <src> is a URL and <dest> does not end with a trailing slash, the file is downloaded from the URL and copied to <dest>.
- If <src> is a URL and <dest> ends with a trailing slash, the filename is inferred from the URL and the file is downloaded to <dest>/<filename>. For instance, ADD http://example.com/foobar dest/ creates the file dest/foobar. The URL must have a nontrivial path so that an appropriate filename can be discovered (http://example.com will not work).
- If <src> is a directory, the entire contents of the directory are copied, including filesystem metadata. (The directory itself is not copied, just its contents.)
- If <src> is a local compressed archive (such as gzip, bzip2, or xz), it is unpacked as a directory. Remote URLs are not decompressed. Unpacking behaves like tar -x: the result is the union of whatever existed at the destination path and the contents of the archive, with conflicts resolved in favor of the archive, file by file. Whether a file is treated as an archive is decided by its contents, not its name; an empty file that happens to be named .tar.gz is not unpacked, just copied.
- If <src> is any other kind of file, it is copied individually along with its metadata. In this case, if <dest> ends with a slash /, it is considered a directory and the contents of <src> are written to <dest>/base(<src>).
- If multiple <src> resources are specified, directly or via a wildcard, <dest> must be a directory and must end with a slash /.
- If <dest> does not end with a trailing slash, it is considered a regular file and the contents of <src> are written to <dest>.
- If <dest> does not exist, it is created along with all missing directories in its path.
If the contents of <src> have changed, the first ADD instruction encountered invalidates the cache for all following instructions, including the cache for RUN instructions.
COPY
The difference between COPY and ADD is that ADD adds remote URLS, COPY does not.
COPY [--chown=<user>:<group>] <src>... <dest>
COPY [--chown=<user>:<group>] ["<src>"."<dest>"]
Copy the code
COPY has two forms; the second is enclosed in double quotes so that paths containing spaces can be expressed.
--chown is only supported for Linux containers, not Windows.
COPY copies new files or directories from <src> and adds them to the image filesystem at the path <dest>.
If <src> is a file or directory, it is interpreted as a path relative to the build context. Wildcards are supported, following Go's filepath.Match rules.
For example, add all files starting with “hom”,
COPY hom* /mydir/
Use ? to match a single character,
COPY hom?.txt /mydir/
<dest> is an absolute path, or a path relative to WORKDIR.
Example, absolute path,
COPY test.txt /absoluteDir/
Example, relative path (resolves to <WORKDIR>/relativeDir/),
COPY test.txt relativeDir/
If the path contains special characters (such as [ and ]), they need to be escaped,
For example, add a file arr[0].txt,
COPY arr[[]0].txt /mydir/
For Linux, you can use --chown to specify a username, groupname, or UID/GID. By default, new files and directories are created with UID 0 and GID 0.
If only username is set without groupname, or only UID is set without GID, GID uses the same value as UID.
The username and groupname are converted into UIds and gids by the container’s root filesystem /etc/passwd and /etc/group. If the Container does not have the two files, an error message is displayed after username/groupname is set. This can be avoided by setting the UID/GID.
Example,
COPY --chown=55:mygroup files* /somedir/
COPY --chown=bin files* /somedir/
COPY --chown=1 files* /somedir/
COPY --chown=10:11 files* /somedir/
If you build by passing a Dockerfile through STDIN (docker build - < somefile), there is no build context, so COPY cannot be used.
COPY supports the flag --from=<name|index>, which sets the source location to a previous build stage (created with FROM ... AS <name>) instead of the build context. The value can be the stage name or an index (numbering all stages started with FROM). If a build stage with the given name cannot be found, an image with the same name is used instead.
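For example, a sketch of a multi-stage build that uses COPY --from (the stage name, paths, and base images are illustrative),

FROM golang:1.16 AS builder
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /out/app .

FROM alpine:3.12
# Copy only the compiled binary from the "builder" stage, not from the build context
COPY --from=builder /out/app /usr/local/bin/app
ENTRYPOINT ["/usr/local/bin/app"]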
COPY follows the following rules:
- <src> must be inside the build context; you cannot COPY ../something /something, because the first step of docker build is to send the context directory (and its subdirectories) to the Docker daemon.
- If <src> is a directory, the entire contents of the directory are copied, including filesystem metadata. (The directory itself is not copied, just its contents.)
- If <src> is any other kind of file, it is copied individually along with its metadata. In this case, if <dest> ends with a slash /, it is considered a directory and the contents of <src> are written to <dest>/base(<src>).
- If multiple <src> resources are specified, directly or via a wildcard, <dest> must be a directory and must end with a slash /.
- If <dest> does not end with a trailing slash, it is considered a regular file and the contents of <src> are written to <dest>.
- If <dest> does not exist, it is created along with all missing directories in its path.
If the contents of <src> have changed, the first COPY instruction encountered invalidates the cache for all following instructions, including the cache for RUN instructions.
ENTRYPOINT
The exec format
ENTRYPOINT ["executable"."param1"."param2"]
Copy the code
Shell format
ENTRYPOINT command param1 param2
ENTRYPOINT is used to configure Container to run as an executable file.
For example, start nginx with its default content, listening on port 80,
$ docker run -i -t --rm -p 80:80 nginx
Command-line arguments to docker run <image> are appended after all elements of an exec-form ENTRYPOINT and override all elements defined by CMD; for example, docker run <image> -d passes -d to the entry point. You can override the ENTRYPOINT instruction with the docker run --entrypoint flag (this only sets the binary to exec; it is not wrapped in sh -c).
The shell form prevents any CMD or docker run command-line arguments from being used, but it has the downside that the ENTRYPOINT is started as a subcommand of /bin/sh -c, which does not pass signals. The executable is then not the container's PID 1 and will not receive Unix signals (software interrupts), so it will not receive a SIGTERM from docker stop <container>.
Only the last ENTRYPOINT of a Dockerfile is valid.
ENTRYPOINT exec form example
FROM ubuntu
ENTRYPOINT ["top"."-b"]
CMD ["-c"]
When running container, top is the only process.
$ docker run -it --rm --name test top -H
top - 08:25:00 up 7:27, 0 users, load average: 0.00, 0.01, 0.05
Threads:   1 total,   1 running,   0 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.1 us,  0.1 sy,  0.0 ni, 99.7 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem:   2056668 total,  1616832 used,   439836 free,    99352 buffers
KiB Swap:  1441840 total,        0 used,  1441840 free.  1324440 cached Mem

  PID USER      PR  NI    VIRT    RES    SHR S %CPU %MEM     TIME+ COMMAND
    1 root      20   0   19744   2336   2080 R  0.0  0.1   0:00.04 top
To examine the result further, use docker exec,
$ docker exec -it test ps aux
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root         1  2.6  0.1  19752  2352 ?        Ss+  08:24   0:00 top -b -H
root         7  0.0  0.1  15572  2164 ?        R+   08:25   0:00 ps aux
The command is top -b -H: top -b comes from ENTRYPOINT, and -H is the docker run command-line argument appended to ENTRYPOINT, overriding the CMD default -c.
You can then gracefully request top to shut down using docker stop test.
For example, using ENTRYPOINT to run Apache in the foreground (that is, PID 1),
FROM debian:stable
RUN apt-get update && apt-get install -y --force-yes apache2
EXPOSE 80 443
VOLUME ["/var/www"."/var/log/apache2"."/etc/apache2"]
ENTRYPOINT ["/usr/sbin/apache2ctl"."-D"."FOREGROUND"]
Copy the code
If you want to write a startup script for a single executable, use the exec and gosu commands to ensure that the executable receives Unix signals.
#!/usr/bin/env bash
set -e

# If the container is asked to run postgres, prepare the data directory first
if [ "$1" = 'postgres' ]; then
    chown -R postgres "$PGDATA"

    # Initialize the database on first run
    if [ -z "$(ls -A "$PGDATA")" ]; then
        gosu postgres initdb
    fi

    # Replace the shell with postgres running as the postgres user (PID 1)
    exec gosu postgres "$@"
fi

exec "$@"
Finally, if you need to do extra cleanup at shutdown (or coordinate with other containers), or you are coordinating multiple executables rather than a single one, you may need to make sure the ENTRYPOINT script receives the Unix signals, passes them on, and then does some more work,
#!/bin/sh
# Note: I've written this using sh so it works in the busybox container too
# USE the trap if you need to also do manual cleanup after the service is stopped,
# or need to start multiple services in the one container
trap "echo TRAPed signal" HUP INT QUIT TERM
# start service in background here
/usr/sbin/apachectl start
echo "[hit enter key to exit] or run 'docker stop <container>'"
read
# stop service and clean up here
echo "stopping apache"
/usr/sbin/apachectl stop
echo "exited $0"
Run the image with docker run --rm -p 80:80 --name test apache; you can then examine the container's processes and ask the script to stop Apache,
$ docker exec -it test ps aux
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root         1  0.1  0.0   4448   692 ?        Ss+  00:42   0:00 /bin/sh /run.sh 123 cmd cmd2
root        19  0.0  0.2  71304  4440 ?        Ss   00:42   0:00 /usr/sbin/apache2 -k start
www-data    20  0.2  0.2 360468  6004 ?        Sl   00:42   0:00 /usr/sbin/apache2 -k start
www-data    21  0.2  0.2 360468  6000 ?        Sl   00:42   0:00 /usr/sbin/apache2 -k start
root        81  0.0  0.1  15572  2140 ?        R+   00:44   0:00 ps aux
$ docker top test
PID USER COMMAND
10035 root {run.sh} /bin/sh /run.sh 123 cmd cmd2
10054 root /usr/sbin/apache2 -k start
10055 33 /usr/sbin/apache2 -k start
10056 33 /usr/sbin/apache2 -k start
$ /usr/bin/time docker stop test
test
real	0m 0.27s
user	0m 0.03s
sys	0m 0.03s
The shell form invokes a command shell, while the exec form does not, so shell processing (such as variable substitution) does not happen in the exec form. To get shell processing with the exec form, run a shell explicitly, e.g. ENTRYPOINT ["sh", "-c", "echo $HOME"].
Note that the exec format is parsed as a JSON array, so you can only use double quotes. Note also the backslash.
ENTRYPOINT shell form example
In the shell form, ENTRYPOINT takes a plain string that is executed in /bin/sh -c. This form uses shell processing to substitute shell environment variables, and it ignores any CMD or docker run command-line arguments. To ensure that docker stop can signal a long-running ENTRYPOINT executable correctly, remember to start it with exec,
FROM ubuntu
ENTRYPOINT exec top -b
When you run the image, you’ll see a single PID 1 process,
$ docker run -it --rm --name test top
Mem: 1704520K used, 352148K free, 0K shrd, 0K buff, 140368121167873K cached
CPU:   5% usr   0% sys   0% nic  94% idle   0% io   0% irq   0% sirq
Load average: 0.08 0.03 0.05 2/98 6
  PID  PPID USER     STAT   VSZ %VSZ %CPU COMMAND
    1     0 root     R     3164   0%   0% top -b
The process exits cleanly on docker stop,
$ /usr/bin/time docker stop test
test
real	0m 0.20s
user	0m 0.0s
sys	0m 0.04s
If you forget to add exec before ENTRYPOINT,
FROM ubuntu
ENTRYPOINT top -b
CMD --ignored-param1
Run it (giving the container a name for the next step),
$ docker run -it --name test top --ignored-param2
Mem: 1704184K used, 352484K free, 0K shrd, 0K buff, 140621524238337K cached
CPU:   9% usr   2% sys   0% nic  88% idle   0% io   0% irq   0% sirq
Load average: 0.01 0.02 0.05 2/101 7
  PID  PPID USER     STAT   VSZ %VSZ %CPU COMMAND
    1     0 root     S     3168   0%   0% /bin/sh -c top -b cmd cmd2
    7     1 root     R     3164   0%   0% top -b
You can see that the process started by ENTRYPOINT (top) is not PID 1.
If you run docker stop test, the container does not exit cleanly; after a timeout, the stop command is forced to send a SIGKILL,
$ docker exec -it test ps aux
PID USER COMMAND
1 root /bin/sh -c top -b cmd cmd2
7 root top -b
8 root ps aux
$ /usr/bin/time docker stop test
test
real	0m 10.19s
user	0m 0.04s
sys	0m 0.03s
The real time of 10.19s shows that the stop had to wait for the timeout.
How are CMD and ENTRYPOINT used together
The CMD and ENTRYPOINT directives define which command is executed when a container runs. There are a few rules for how they work together,
- A Dockerfile should define at least one CMD or ENTRYPOINT instruction.
- If the container is meant to be used as an executable, define ENTRYPOINT.
- Use CMD to supply default parameters for ENTRYPOINT, or to run an ad-hoc (temporary) command in the container.
- CMD is overridden when the container is run with alternative arguments.
The following table shows the different combinations of CMD and ENTRYPOINT directives
 | No ENTRYPOINT | ENTRYPOINT exec_entry p1_entry | ENTRYPOINT ["exec_entry", "p1_entry"] |
---|---|---|---|
No CMD | error, not allowed | /bin/sh -c exec_entry p1_entry | exec_entry p1_entry |
CMD ["exec_cmd", "p1_cmd"] | exec_cmd p1_cmd | /bin/sh -c exec_entry p1_entry | exec_entry p1_entry exec_cmd p1_cmd |
CMD ["p1_cmd", "p2_cmd"] | p1_cmd p2_cmd | /bin/sh -c exec_entry p1_entry | exec_entry p1_entry p1_cmd p2_cmd |
CMD exec_cmd p1_cmd | /bin/sh -c exec_cmd p1_cmd | /bin/sh -c exec_entry p1_entry | exec_entry p1_entry /bin/sh -c exec_cmd p1_cmd |
Note that if CMD is defined in the base image, setting ENTRYPOINT resets CMD to an empty value. If you want to use CMD in that case, you must redefine it in the current image.
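For example, a rough sketch of a child image that sets ENTRYPOINT and therefore restates its own CMD (the base image and tool names are illustrative),

# Suppose mybase:latest defines a CMD of its own
FROM mybase:latest
ENTRYPOINT ["/usr/bin/mytool"]
# Setting ENTRYPOINT reset the inherited CMD, so the default arguments must be restated
CMD ["--help"]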
VOLUME
VOLUME ["/data"]
The VOLUME instruction creates a mount point and marks it as holding externally mounted volumes from the native host or from other containers.
The value can be a JSON array, such as VOLUME ["/var/log/"], or a plain string with one or more paths, such as VOLUME /var/log or VOLUME /var/log /var/db.
The docker run command initializes the newly created volume with any data that exists at the specified location in the base image.
Example,
FROM ubuntu
RUN mkdir /myvol
RUN echo "hello world" > /myvol/greeting
VOLUME /myvol
docker run creates a mount point at /myvol and copies the greeting file into the newly created volume.
Keep the following in mind,
- Windows-based containers: the destination of a volume inside the container must be either a nonexistent or empty directory, or a drive other than C:.
- Changing the volume from within the Dockerfile: any build step that changes the data in a volume after it has been declared is discarded.
- Use double quotes in the JSON form, not single quotes.
- The host directory (mount point) is declared only at container run time: mount points are host-dependent, and the host directory cannot be guaranteed to exist on every host. To preserve image portability, you cannot specify a host directory in a Dockerfile; it must be specified when creating or running the container. The VOLUME instruction does not support a host-dir parameter either.
USER
USER <user>[:<group>]
or
USER <UID>[:<GID>]
The USER directive is used to specify the USER name/group when RUN, CMD, and ENTRYPOINT directives are executed. The USER directive can set the USER name (or UID), optionally the USER group (or GID).
If the user group is defined, then the user only has the membership of the group, and any other configured group memberships will be ignored.
If the user does not have a primary group, the image (and the following instructions) run with the root group.
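On Linux, a rough sketch of creating a dedicated user and group before switching to them (the names are illustrative),

FROM ubuntu
# Create the group and user so USER can resolve them via /etc/passwd and /etc/group
RUN groupadd -r appgroup && useradd -r -g appgroup appuser
USER appuser:appgroup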
On Windows, if the user is not a built-in account, it must be created first; this can be done with the net user command called from the Dockerfile,
FROM microsoft/windowsservercore
# Create Windows user in the container
RUN net user /add patrick
# Set it for subsequent commands
USER patrick
WORKDIR
WORKDIR /path/to/workdir
WORKDIR Sets the working directory for the RUN, CMD, ENTRYPOINT, COPY and ADD directives.
If the WORKDIR does not exist, it is created, even if it is never used in any subsequent Dockerfile instruction.
The WORKDIR directive can be defined multiple times in a Dockerfile. If it is a relative path, it is the path relative to the previous WORKDIR directive.
Example,
WORKDIR /a
WORKDIR b
WORKDIR c
RUN pwd
The pwd output is /a/b/c.
WORKDIR can reference environment variables defined by ENV, for example,
ENV DIRPATH /path
WORKDIR $DIRPATH/$DIRNAME
RUN pwd
The result of PWD is /path/$DIRNAME.
ARG
ARG <name>[=<default value>]
The ARG directive defines a variable that users can pass to the builder at build time with the docker build command, using the flag --build-arg <varname>=<value>. If the user passes a build argument that is not defined in the Dockerfile, the build reports a warning,
[Warning] One or more build-args [foo] were not consumed.
A Dockerfile can contain one or more ARG directives.
Example,
FROM busybox
ARG user1
ARG buildno
#...
Warning! It is not recommended to use build-time variables to pass private data such as GitHub keys or user credentials, because any user of the image can view the build-time variables with docker history.
Default values
The ARG directive can be set to default values (optional),
FROM busybox
ARG user1=someuser
ARG buildno=1
#...
If the ARG directive has a default value and no value is passed at build-time, the Builder will use that default value.
Scope
An ARG variable comes into effect from the line on which it is defined in the Dockerfile, not from where it is used on the command line or elsewhere.
Example,
FROM busybox
USER ${user:-some_user}
ARG user
USER $user
#...
A user builds this file with,
$ docker build --build-arg user=what_user .
The USER on line 2 turns out to be some_user because the USER variable is defined on line 3.
The USER on line 4 resolves to what_user, because the user variable has been defined by then and the what_user value was passed on the command line.
Before the ARG directive is defined, any variable used results in an empty string.
At the end of the build stage defined by the ARG, the ARG directive is out of scope. To use the same ARG in multiple stages, each stage must include the ARG instruction.
FROM busybox
ARG SETTINGS
RUN ./run/setup $SETTINGS
FROM busybox
ARG SETTINGS
RUN ./run/other $SETTINGS
Using ARG variables
You can use the ARG or ENV directives to define variables for the RUN directive. Environment variables defined by ENV always override variables defined by ARG with the same name.
Example,
FROM ubuntu
ARG CONT_IMG_VER
ENV CONT_IMG_VER v1.0.0
RUN echo $CONT_IMG_VER
Assuming you use this command build image,
$ docker build --build-arg CONT_IMG_VER=v2.0.1 .
RUN will use v1.0.0 instead of the V2.0.1 passed by ARG. This behavior is somewhat similar to that of a shell script, where a local variable overrides variables passed as arguments or inherited from the environment definition.
A more useful pattern is to combine ARG with ENV so that ENV provides a default which the build argument can override,
FROM ubuntu
ARG CONT_IMG_VER
ENV CONT_IMG_VER ${CONT_IMG_VER:-v1.0.0}
RUN echo $CONT_IMG_VER
Unlike ARG, ENV values are persisted in build image. If not –build-arg build,
$ docker build .
With the Dockerfile, CONT_IMG_VER will still persist in the image, which has a value of v1.0.0 because ENV defines the default value in line 3.
In this example, the combination lets a command-line argument be passed in and persisted in the final image through ENV. Variable expansion is only supported for the following Dockerfile instructions,
ADD
COPY
ENV
EXPOSE
FROM
LABEL
STOPSIGNAL
USER
VOLUME
WORKDIR
ONBUILD (when combined with one of the instructions above)
Predefined ARGs
Docker has some predefined ARG variables that you can use without using ARG directives.
HTTP_PROXY
http_proxy
HTTPS_PROXY
https_proxy
FTP_PROXY
ftp_proxy
NO_PROXY
no_proxy
Use them directly on the command line,
--build-arg <varname>=<value>
By default, these predefined variables are not shown in docker history. This reduces the risk of accidentally leaking sensitive credentials in the HTTP_PROXY variable.
For example, build Dockerfile using –build-arg HTTP_PROXY=http://user:[email protected],
FROM ubuntu
RUN echo "Hello World"
The HTTP_PROXY variable is not printed to docker history and is not cached. If the proxy server becomes http://user:[email protected], subsequent builds will not cause a cache miss.
An ARG instruction can override this default behavior,
FROM ubuntu
ARG HTTP_PROXY
RUN echo "Hello World"
When the Dockerfile is built, HTTP_PROXY is stored in the Docker history. If its value changes, build caching is disabled.
Impact on cache
ARG variables are not persisted into the image the way ENV variables are, but they affect the build cache in a similar way. If a Dockerfile defines an ARG variable whose value differs from a previous build, a "cache miss" occurs on the variable's first usage, not on its definition. In particular, all RUN instructions following an ARG instruction use the ARG variable implicitly (as an environment variable), which can cause a cache miss. Predefined ARGs do not affect the cache unless a matching ARG instruction appears in the Dockerfile.
Example: two Dockerfiles. The first,
FROM ubuntu
ARG CONT_IMG_VER
RUN echo $CONT_IMG_VER
and the second,
FROM ubuntu
ARG CONT_IMG_VER
RUN echo hello
If you specify --build-arg CONT_IMG_VER=<value> on the command line, then in both cases line 2 does not cause a cache miss, but line 3 does. ARG CONT_IMG_VER causes the RUN line to be treated as if it ran CONT_IMG_VER=<value> echo hello, so if <value> changes, there is a cache miss.
Another example,
FROM ubuntu
ARG CONT_IMG_VER
ENV CONT_IMG_VER $CONT_IMG_VER
RUN echo $CONT_IMG_VER
A cache miss occurs in line 3. The ARG variable referenced by ENV is changed on the command line. Also, in this example, ENV causes the image to contain the value (ENV is persisted to the image).
If an ENV instruction overrides an ARG variable of the same name,
FROM ubuntu
ARG CONT_IMG_VER
ENV CONT_IMG_VER hello
RUN echo $CONT_IMG_VER
Line 3 does not cause a cache miss, because the value of CONT_IMG_VER is a constant (hello). As a result, the environment variables and values used by the RUN command on line 4 do not change between builds.
ONBUILD
ONBUILD <INSTRUCTION>
The ONBUILD directive adds a trigger to the image; the trigger fires when the image is later used as the base of another build. The trigger executes in the downstream build context, as if it had been inserted immediately after the FROM instruction of the downstream Dockerfile.
Any build directive can be registered as a trigger.
This is useful when you build an image that will be used as a base for building other images, for example an application build environment or a daemon whose configuration is customized per application.
For example, if an image is a reusable Python application Builder (used to build a new application image), it needs to add the application source code to a specific directory and call the Build script. The ADD and RUN directives do not have access to the application source code, and the source code for each application build may be different. You can simply provide app developers with sample DockerFiles to copy and paste into their apps, but this is inefficient, error-prone, and difficult to update because of the confusion with the “app definition” code.
You can use the ONBUILD directive to pre-register the directive and run it at the next build stage.
The process is as follows,
- When the builder encounters an ONBUILD instruction, it adds a trigger to the metadata of the image being built. The instruction does not otherwise affect the current build.
- At the end of the build, all triggers are stored in the image manifest under the key OnBuild. They can be inspected with docker inspect.
- Later, the image may be used as a base for a new build via the FROM instruction. While processing the FROM instruction, the downstream builder looks for ONBUILD triggers and executes them in the order in which they were registered. If any trigger fails, the FROM instruction is aborted and the build fails. If all triggers succeed, the FROM instruction completes and the build continues.
- Triggers are cleared from the final image after being executed; they are not inherited by "grand-child" builds.
You might add something like,
ONBUILD ADD . /app/src
ONBUILD RUN /usr/local/bin/python-build --dir /app/src
Note: 1. Chaining with ONBUILD ONBUILD is not allowed. 2. ONBUILD may not trigger FROM or MAINTAINER instructions.
STOPSIGNAL
STOPSIGNAL signal
The STOPSIGNAL instruction sets the system call signal that will be sent to the container to make it exit. The signal can be a valid unsigned number (matching a position in the kernel's syscall table, such as 9) or a signal name (such as SIGKILL).
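For example, a minimal sketch,

FROM ubuntu
# Send SIGTERM instead of the default signal when the container is asked to stop
STOPSIGNAL SIGTERM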
HEALTHCHECK
Two formats,
- HEALTHCHECK [OPTIONS] CMD command (check container health by running a command inside the container)
- HEALTHCHECK NONE (disable any health check inherited from the base image)
The HEALTHCHECK directive tells Docker how to test if the Container is still working. For example, even though the server is always running, it is actually in an infinite loop and cannot process new connections.
When a container has a health check defined, it has a health status in addition to its normal status. The status is initially starting. Whenever a health check passes, it becomes healthy (whatever state it was in before). After a certain number of consecutive failures, it becomes unhealthy.
The options that can appear before CMD are,
- --interval=DURATION (default: 30s)
- --timeout=DURATION (default: 30s)
- --start-period=DURATION (default: 0s)
- --retries=N (default: 3)
The health check first runs interval seconds after the container starts, and then again interval seconds after each previous check completes.
If a single check takes longer than timeout seconds, it is considered to have failed.
It takes retries consecutive failures of the health check for the container to be considered unhealthy.
start period gives the container time to bootstrap; probe failures during this period are not counted toward the maximum number of retries. However, if a health check succeeds during the start period, the container is considered started and all subsequent consecutive failures count toward the retries.
A Dockerfile can have only one HEALTHCHECK directive. If there are more than one, only the last HEALTHCHECK takes effect.
The command in the first format can be either a shell command (for example, HEALTHCHECK CMD /bin/check-running) or an exec array.
The exit status of command reflects the health status of the Container,
- 0: success – the container is healthy and ready for use
- 1: unhealthy – the container is not working correctly
- 2: reserved – do not use this exit code
For example, check every 5 minutes to ensure that the Web server can serve the front page of the web site in 3 seconds,
HEALTHCHECK --interval=5m --timeout=3s \
CMD curl -f http://localhost/ || exit 1
To help debug failing probes, any output text written to stdout or stderr (UTF-8 encoded) is stored in the health state and can be queried with docker inspect. The output should be short (currently only the first 4096 bytes are stored).
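For instance, a short sketch of reading the stored health state (the container name is illustrative; the Go template assumes the standard .State.Health field),

$ docker inspect --format='{{json .State.Health}}' mycontainer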
When the health status of the container changes, a health_status event is generated with the new status.
SHELL
SHELL ["executable"."parameters"]
Copy the code
The SHELL directive overrides the default shell used for shell-form commands. On Linux the default shell is ["/bin/sh", "-c"]; on Windows it is ["cmd", "/S", "/C"]. The SHELL instruction must be written in JSON form in the Dockerfile.
SHELL directives are particularly useful on Windows, which has two different native shells in common use, CMD and Powershell, as well as SHELL alternatives, including sh.
SHELL instructions can appear more than once. Each SHELL instruction overrides all previous SHELL instructions, affecting subsequent ones.
Example,
FROM microsoft/windowsservercore
# Executed as cmd /S /C echo default
RUN echo default
# Executed as cmd /S /C powershell -command Write-Host default
RUN powershell -command Write-Host default
# Executed as powershell -command Write-Host hello
SHELL ["powershell"."-command"]
RUN Write-Host hello
# Executed as cmd /S /C echo hello
SHELL ["cmd"."/S"."/C"]
RUN echo hello
The SHELL instruction affects shell-form RUN, CMD, and ENTRYPOINT instructions that appear after it in the Dockerfile.
For example, a common pattern on Windows can be simplified by using the SHELL instruction,
RUN powershell -command Execute-MyCmdlet -param1 "c:\foo.txt"
Docker calls the command,
cmd /S /C powershell -command Execute-MyCmdlet -param1 "c:\foo.txt"
This is a little inefficient for two reasons. First, an unnecessary cmd.exe command line processor (a shell) is invoked. Second, every shell-form RUN instruction needs the extra powershell -command prefix.
To be more efficient, there are two mechanisms. One is to use the JSON format,
RUN ["powershell"."-command"."Execute-MyCmdlet"."-param1 \"c:\\foo.txt\""]
Copy the code
The JSON format is clear and does not use cmd.exe unnecessarily. But the need for double quotes and escapes is a bit redundant.
The second is to use the SHELL instruction and the shell form, which gives Windows users a more natural syntax, especially when combined with the escape parser directive,
# escape=`
FROM microsoft/nanoserver
SHELL ["powershell"."-command"]
RUN New-Item -ItemType Directory C:\Example
ADD Execute-MyCmdlet.ps1 c:\example\
RUN c:\example\Execute-MyCmdlet -sample 'hello world'
As a result,
PS E:\docker\build\shell> docker build -t shell .
Sending build context to Docker daemon 4.096 kB
Step 1/5 : FROM microsoft/nanoserver
 ---> 22738ff49c6d
Step 2/5 : SHELL powershell -command
---> Running in 6fcdb6855ae2
---> 6331462d4300
Removing intermediate container 6fcdb6855ae2
Step 3/5 : RUN New-Item -ItemType Directory C:\Example
---> Running in d0eef8386e97
Directory: C:\
Mode LastWriteTime Length Name
---- ------------- ------ ----
d----- 10/28/2016 11:26 AM Example
---> 3f2fbf1395d9
Removing intermediate container d0eef8386e97
Step 4/5 : ADD Execute-MyCmdlet.ps1 c:\example\
---> a955b2621c31
Removing intermediate container b825593d39fc
Step 5/5 : RUN c:\example\Execute-MyCmdlet 'hello world'
---> Running in be6d8e63fe75
hello world
---> 8e559e9bf424
Removing intermediate container be6d8e63fe75
Successfully built 8e559e9bf424
PS E:\docker\build\shell>
The SHELL instruction can also be used to change the way a shell operates. For example, SHELL cmd /S /C /V:ON|OFF on Windows changes the delayed environment variable expansion semantics.
SHELL commands can also be used on Linux. The alternative shells are ZSH, CSH, TCSH, etc.
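For example, a sketch of switching to bash on Linux so that shell-form instructions can use bash-only features (assumes bash exists in the base image),

FROM ubuntu
SHELL ["/bin/bash", "-c"]
# Shell-form instructions now run under bash, so bash-specific variables work
RUN echo "built with bash ${BASH_VERSION}" > /build-shell.txt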
Dockerfile examples
# Nginx
#
# VERSION 0.0.1
FROM ubuntu
LABEL Description="This image is used to start the foobar executable" Vendor="ACME Products" Version="1.0"
RUN apt-get update && apt-get install -y inotify-tools nginx apache2 openssh-server
# Firefox over VNC
#
# VERSION 0.3
FROM ubuntu
# Install vnc, xvfb in order to create a 'fake' display and firefox
RUN apt-get update && apt-get install -y x11vnc xvfb firefox
RUN mkdir ~/.vnc
# Setup a password
RUN x11vnc -storepasswd 1234 ~/.vnc/passwd
# Autostart firefox (might not be the best way, but it does the trick)
RUN bash -c 'echo "firefox" >> /.bashrc'
EXPOSE 5900
CMD ["x11vnc"."-forever"."-usepw"."-create"]
Copy the code
# Multiple images example
#
# VERSION 0.1
FROM ubuntu
RUN echo foo > bar
# Will output something like ===> 907ad6c2736f
FROM ubuntu
RUN echo moo > oink
# Will output something like ===> 695d7793cbe4
# You'll now have two images, 907ad6c2736f with /bar, and 695d7793cbe4 with
# /oink.
See Resources for further reading.
- BuildKit (third-party tools)
- The syntax parser directive (depends on BuildKit)
- RUN Known bug (Issue 783)
- External Implementation Features (rely on BuildKit)
- Automatic Platform ARGs in the Global Scope
Resources
Docs.docker.com/engine/refe…
Stay tuned for the next Dockerfile Best Practices article.
Copyright notice: this article is the blogger’s original article, please retain the original link and author.
If you like my articles, please follow my official account to show your support. Thank you!