Origin
A basic introduction to Docker, and how to install docker-compose, is beyond the scope of this article.
This article is essentially a careful translation of the Docker Compose YAML file format reference. I did this yesterday because, while scanning a docker-compose file, I wondered how to use ${PWD}. The Chinese materials were not helpful, but the official site finally resolved my confusion. Therefore, I think a rigorous translation and explanation of the details of docker-compose orchestration is worth making.
Below, we will focus on the details of version 3 of the Docker-compose orchestration file format.
To read this article, you should have a basic understanding of docker-compose, or at least a basic understanding of the early (version 2) orchestration format.
About licensing
The original text belongs to docs.docker.com/compose/com… .
The translation itself, at https://github.com/hedzr/docker-compose-file-format, is distributed under MIT (ignoring any platform-level permission notices; follow the statements in the repo itself).
V3.8 instructions
Last time I made an older translation: the docker-compose orchestration guide (v3.7). This translation is an update based on that one. I have to say, it's a bit tedious.
Orchestration file version 3
History
Version 3 is the format docker-compose has supported since Docker Engine 1.13. Before that, Docker introduced Swarm mode in 1.12 to build virtual computing resources in a virtual network, and greatly improved Docker's network and storage support.
The following table (from the official website) provides a clear correspondence between Compose file format versions and Docker Engine releases.
| Compose file format | Docker Engine release |
|---|---|
| 3.8 | 19.03.0+ |
| 3.7 | 18.06.0+ |
| 3.6 | 18.02.0+ |
| 3.5 | 17.12.0+ |
| 3.4 | 17.09.0+ |
| 3.3 | 17.06.0+ |
| 3.2 | 17.04.0+ |
| 3.1 | 1.13.1+ |
| 3.0 | 1.13.0+ |
| 2.4 | 17.12.0+ |
| 2.3 | 17.06.0+ |
| 2.2 | 1.13.0+ |
| 2.1 | 1.12.0+ |
| 2.0 | 1.10.0+ |
| 1.0 | 1.9.1+ |
A special addition
Orchestration file structure, with an example
Here is a sample of a typical file structure for version 3+:
version: "3.7" # v3.8 is fine too
services:
  redis:
    image: redis:alpine
    ports:
      - "6379"
    networks:
      - frontend
    deploy:
      replicas: 2
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
  db:
    image: postgres:9.4
    volumes:
      - db-data:/var/lib/postgresql/data
    networks:
      - backend
    deploy:
      placement:
        constraints: [node.role == manager]
  vote:
    image: dockersamples/examplevotingapp_vote:before
    ports:
      - "5000:80"
    networks:
      - frontend
    depends_on:
      - redis
    deploy:
      replicas: 2
      update_config:
        parallelism: 2
      restart_policy:
        condition: on-failure
  result:
    image: dockersamples/examplevotingapp_result:before
    ports:
      - "5001:80"
    networks:
      - backend
    depends_on:
      - db
    deploy:
      replicas: 1
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
  worker:
    image: dockersamples/examplevotingapp_worker
    networks:
      - frontend
      - backend
    deploy:
      mode: replicated
      replicas: 1
      labels: [APP=VOTING]
      restart_policy:
        condition: on-failure
        delay: 10s
        max_attempts: 3
        window: 120s
      placement:
        constraints: [node.role == manager]
  visualizer:
    image: dockersamples/visualizer:stable
    ports:
      - "8080:8080"
    stop_grace_period: 1m30s
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
      placement:
        constraints: [node.role == manager]
networks:
  frontend:
  backend:
volumes:
  db-data:
In this sample, the top-level structure consists of version, services, networks, volumes, and so on. This is not drastically different from version 2.
In the services section, you can define several services; each typically runs one container, and together they constitute an overall facility stack, or cluster of services.
Generally we orchestrate a bunch of miscellaneous things, such as a group of microservices, into a service stack so that they are served as a whole. This avoids exposing the details and also improves architectural flexibility, letting you scale the entire service stack rather than dealing with a large number of individual microservices.
[Dockerfile] About ENTRYPOINT and CMD
Docker RUN vs CMD vs ENTRYPOINT – Go in Big Data
In a nutshell
- RUN executes command(s) in a new layer and creates a new image. E.g., it is often used for installing software packages.
- CMD sets default command and/or parameters, which can be overwritten from command line when docker container runs.
- ENTRYPOINT configures a container that will run as an executable.
Let's discuss these three commands in a Dockerfile:
First, RUN is straightforward: it executes shell commands in a new layer, so it needs no further discussion here.
CMD and ENTRYPOINT look similar but are actually different. CMD specifies the default command and its command-line arguments (or just the trailing arguments), and it can be overridden when the container is started, by whatever is given on the external command line. ENTRYPOINT specifies the startup command run when the container launches, along with its command-line arguments; when actually executed, the content specified by CMD is appended to the end of its command line.
Typical usage
So a common idiom is:
ENTRYPOINT ["/bin/echo", "Hello"]
CMD ["world"]
The effect of executing the container then looks like this:
$ docker run -it test-container
Hello world
$ docker run -it test-container David
Hello David
For your container
So for custom containers, the path to your main application can be used as an ENTRYPOINT while providing default arguments in CMD:
ENTRYPOINT ["/usr/local/app/my-app"]
CMD ["--help"]
This wraps your my-app like an exposed tool, showing its help screen by default, and you can pass parameters to run my-app with them:
$ docker run -it my-app-container
[... help screen here ...]
$ docker run -it my-app-container server run --foreground
[my-app server run --foreground]
END
[Dockerfile] Multi-stage builds
Multi-stage builds are typically used in CI/CD.
For example, stage 0 could be named builder and compile the source into object files, while stage 1 extracts the object files from stage 0, packages them for deployment, and produces the final container image; the intermediate layers of stage 0 are then discarded (only in the sense of not being packaged into the resulting container; these intermediate layers actually still exist in the Docker build cache and can be reused). Since these intermediate layers do not appear in the final container image, the final image size is effectively reduced, and the result remains semantically and logically self-consistent.
FROM golang:1.7.3 AS builder
WORKDIR /go/src/github.com/alexellis/href-counter/
RUN go get -d -v golang.org/x/net/html
COPY app.go .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=builder /go/src/github.com/alexellis/href-counter/app .
CMD ["./app"]
Format manual – service
Configuration reference
The following is the chapter structure of this reference manual. The service orchestration directives are listed roughly in alphabetical order: ports, volumes, command, entrypoint, and so on.
The Compose file is a YAML text file that defines services, networks, and volumes. By default, docker-compose looks for a ./docker-compose.yml file and interprets it.
Tip: You can use either .yml or .yaml as the suffix for this file; both work correctly.
The configuration of a service contains several definitions that specify how to run a container as that service; these definitions are in effect passed to docker run as part of its command-line arguments. In the same way, the definitions of networks, volumes, and so on are applied to commands such as docker network create and docker volume create.
You can use environment variables in configuration values; the syntax is similar to Bash variable substitution, e.g. ${VARIABLE}. See the Variable substitution section for an in-depth discussion.
All valid service configuration items are listed in the following section.
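As a quick taste of what such substitutions look like in practice, here is a minimal sketch (the web service and the EXTERNAL_PORT variable name are invented for illustration):

```yaml
# Sketch only: EXTERNAL_PORT is read from the shell environment (or an
# .env file) when docker-compose parses the file; ${PWD} expands likewise
# to the current working directory on the host.
version: "3.8"
services:
  web:
    image: nginx:alpine
    ports:
      - "${EXTERNAL_PORT}:80"
    volumes:
      - "${PWD}/html:/usr/share/nginx/html"
```

Running `EXTERNAL_PORT=8080 docker-compose up` would then publish the container's port 80 on host port 8080.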
build
This option holds configuration applied at build time.
Build can be a path string pointing to the build context, for example:
version: "3.8"
services:
  webapp:
    build: ./dir
Or it could be a more detailed definition. This includes the path specified by the context item, as well as the optional dockerFile and the build parameter args:
version: "3.8"
services:
  webapp:
    build:
      context: ./dir
      dockerfile: Dockerfile-alternate
      args:
        buildno: 1
If you specify an image as well as a build, the result of the build will be tagged with the corresponding name, just as docker build -t container-name:tag dir does:
build: "./dir"
image: "company/webapp:v1.1.9"
With YAML, a safe way to avoid ambiguity is to enclose strings in quotes.
In the example above, the build context is found in the ./dir folder (by default, the Dockerfile there), the build is performed, and the result is tagged with the name company/webapp and the tag v1.1.9.
NOTE: When used with docker stack deploy, the build option is ignored.
context
This can be a folder containing a Dockerfile or a URL pointing to a Git Repository.
If a relative path is specified, it is relative to the docker-compose.yml file. This path is also passed to the Docker daemon for the build.
Docker-compose initiates the build action and marks the build result (by image name), and then uses it by the corresponding name.
build:
  context: ./dir
dockerfile
You can specify a different file name than the default Dockerfile for the build. Note that you must also specify a path to the context:
build:
  context: .
  dockerfile: Dockerfile-alternate
args
Specify build parameters. This usually refers to parameters used at build time (see ARG in Dockerfile).
Here’s a quick overview:
First, specify parameters in Dockerfile:
ARG buildno
ARG gitcommithash
RUN echo "Build number: $buildno"
RUN echo "Based on commit: $gitcommithash"
Then specify the actual values of the build arguments (either a map or an array is fine):
build:
  context: .
  args:
    buildno: 1
    gitcommithash: cdc3b19
Or:
build:
  context: .
  args:
    - buildno=1
    - gitcommithash=cdc3b19
NOTE: In a Dockerfile, if you specify an ARG before the first FROM, that ARG is not available inside the subsequent FROM closures.
Multiple FROM instructions carve out multiple build closures (stages).
For an ARG to be valid in every FROM closure, you need to specify it in each one.
There is a more detailed discussion in Understand how ARGS and FROM interact.
You can skip explicitly specifying build parameters. At this point, the actual value of this parameter depends on the build-time runtime environment.
args:
  - buildno
  - gitcommithash
NOTE: YAML booleans (true, false, yes, no, on, off) must be enclosed in quotes for docker-compose to handle them correctly.
cache_from
Since v3.2
Specifies a list of images for caching resolution.
build:
  context: .
  cache_from:
    - alpine:latest
    - corp/web_app:3.14
labels
Since v3.3
Add metadata labels to the built image; either an array or a dictionary works.
We recommend using reverse-DNS notation prefixes to prevent your labels from colliding with users' labels:
build:
  context: .
  labels:
    com.example.description: "Accounting webapp"
    com.example.department: "Finance"
    com.example.label-with-empty-value: ""

# another example
build:
  context: .
  labels:
    - "com.example.description=Accounting webapp"
    - "com.example.department=Finance"
    - "com.example.label-with-empty-value"
network
Since v3.4
Sets the network that RUN instructions connect to during the build; it is also used for looking up and pulling dependencies.
build:
  context: .
  network: host

build:
  context: .
  network: custom_network_1
Set it to none to disable networking during the build:
build:
  context: .
  network: none
shm_size
Since v3.5
Sets the size of the /dev/shm partition for the build container. An integer value is interpreted as bytes, but a string value can also be used:
build:
  context: .
  shm_size: '2gb'

build:
  context: .
  shm_size: 10000000
target
Since v3.4
Build the specific stage defined in the Dockerfile; see the multi-stage build docs:
build:
  context: .
  target: prod
cap_add, cap_drop
Add or remove Linux capabilities for containers. For a complete listing, see Man 7 Capabilities.
cap_add:
  - ALL
cap_drop:
  - NET_ADMIN
  - SYS_ADMIN
NOTE: These options are ignored when deploying a stack to Swarm Mode.
See also debug a stack in swarm mode
The Linux capabilities mechanism is, by and large, a security mechanism. Its specific meaning, usage, and extensions belong to the domain of the Linux operating system and are not covered further here.
cgroup_parent
Optionally, specify a parent Cgroup for the container. Cgroup is also one of the most important basic concepts in Linux containerization implementations.
cgroup_parent: m-executor-abcd
NOTE: These options are ignored when deploying a stack to Swarm Mode.
See also debug a stack in swarm mode
command
Override the default commands in the container.
command: bundle exec thin -p 3000
command can also be specified as a list. In fact, this is the preferred way: unambiguous, safe, and consistent with the Dockerfile format:
command: ["bundle", "exec", "thin", "-p", "3000"]
configs
Provide specific access to Config for each service.
A config contains a set of configuration information that can be created in a variety of ways. Deployment of containers that reference these configurations can better address issues such as production environment parameters. On the other hand, sensitive information can be isolated into a secure area, reducing the possibility of leakage to some extent.
NOTE: The specified configuration must already exist or be defined in the top-level Configs configuration. Otherwise the deployment of the entire container stack will fail.
Two different syntax variants are supported. Refer to configs for more information.
The short format
Specify only the config name; the container can then access that config:
version: "3.8"
services:
  redis:
    image: redis:latest
    deploy:
      replicas: 1
    configs:
      - my_config
      - my_other_config
configs:
  my_config:
    file: ./my_config.txt
  my_other_config:
    external: true
The example above uses the short format to reference my_config and my_other_config in the redis container service. Here my_config is backed by the host file ./my_config.txt, while my_other_config is declared external, which means the corresponding resource is already defined in Docker, perhaps created by docker config create or by another stack deployment.
If the external resource is not found, the container stack deployment will fail and a config not found error will be thrown.
Note: The config definition is only supported in the Docker-compose format in V3.3 and later.
Long format
The long format provides more information about where a config can be found and how it can be used:
- source: the name of the config.
- target: the path where the config will be mounted in the container. Defaults to /<source>.
- uid & gid: the numeric UID and GID that own the mounted file in the container; 0 if not specified. Not supported on Windows.
- mode: the file permissions, in octal. The default is 0444. Configs are not writable because they are mounted on a temporary filesystem, so any write bit you set is ignored. The executable bit can be set.
The following example is similar to the short format example:
version: "3.8"
services:
  redis:
    image: redis:latest
    deploy:
      replicas: 1
    configs:
      - source: my_config
        target: /redis_config
        uid: '103'
        gid: '103'
        mode: 0440
configs:
  my_config:
    file: ./my_config.txt
  my_other_config:
    external: true
In this case, the redis container service does not access my_other_config.
You can authorize a service to access multiple configurations, and you can mix long and short formats.
Defining a config (at the top level) does not imply that a service will be able to access it.
container_name
Specify a custom container name instead of docker-compose itself generating a default one.
container_name: my-web-container
Since Docker container names must be unique, you cannot scale a service with a custom container name.
NOTE: These options are ignored when deploying a stack to Swarm Mode.
See also debug a stack in swarm mode
credential_spec
Since v3.3
gMSA (Group Managed Service Account) mode is supported from v3.8.
Configure credentials for a managed service account. This option is only available for services using Windows containers. Only the file:// and registry:// formats can be used for credential_spec.
When using file:, the referenced file must be located in the CredentialSpecs subdirectory of the Docker data directory (usually C:\ProgramData\Docker\). The following example loads credential information from C:\ProgramData\Docker\CredentialSpecs\my-credential-spec.json:
credential_spec:
  file: my-credential-spec.json
When registry: is used, credential information will be read in from Windows Registry on the Docker Daemon host. A registry entry must be located at:
HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\Containers\CredentialSpecs
The following example reads the value of the my-credential-spec registry key:
credential_spec:
  registry: my-credential-spec
GMSA configuration example
When configuring gMSA credentials for a service, consider the following example:
version: "3.8"
services:
  myservice:
    image: myimage:latest
    credential_spec:
      config: my_credential_spec
configs:
  my_credential_spec:
    file: ./my-credential-spec.json
depends_on
Expresses dependency between services. Service dependencies cause the following behaviors:

- docker-compose up starts services in dependency order. In the following example, db and redis are started before web.
- docker-compose up SERVICE automatically includes SERVICE's dependencies. In the following example, docker-compose up web also starts db and redis.
- docker-compose stop stops services in dependency order. In the following example, web is stopped before db and redis.
A simple example is as follows:
version: "3.8"
services:
  web:
    build: .
    depends_on:
      - db
      - redis
  redis:
    image: redis
  db:
    image: postgres
A few things to note when using depends_on:
depends_on does not wait for db and redis to be "ready" before starting web, only for them to have been started. If you need to wait until a service is ready, see Controlling Startup Order.
Version 3 no longer supports the condition form of depends_on.
The depends_on option is ignored when deploying a stack in swarm mode.
See also debug a stack in swarm mode
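Since depends_on only guarantees startup order, not readiness, a common pattern is to wrap the dependent service's entrypoint in a small polling script. A minimal sketch, assuming a wait-for.sh helper script exists in the build context (the script name, the db port, and the python command are illustrative):

```yaml
version: "3.8"
services:
  web:
    build: .
    depends_on:
      - db
    # wait-for.sh polls db:5432 until it accepts TCP connections,
    # then execs the real command (here: python app.py)
    entrypoint: ["./wait-for.sh", "db", "5432", "--", "python", "app.py"]
  db:
    image: postgres
```

This keeps the readiness logic in the service image itself, which also works under swarm mode where compose-level ordering options are unavailable.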
deploy
Version 3 only.
Specify configurations related to deploying and running the service.
This only takes effect when deploying to a swarm with docker stack deploy; it is ignored by docker-compose up and docker-compose run.
version: "3.8"
services:
  redis:
    image: redis:alpine
    deploy:
      replicas: 6
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
Several suboptions are available:
endpoint_mode
Swarm mode only.
Since Version 3.2.
Specifies the service discovery method to use when external clients connect to a Swarm cluster.
- endpoint_mode: vip. Docker requests a virtual IP (VIP) for the service to be accessed through. Docker automatically load-balances requests between the client and the service's healthy worker nodes; the client needs to know neither how many nodes back the service nor their IP addresses or ports. This is the default.
- endpoint_mode: dnsrr. Uses DNS round-robin (DNSRR) service discovery. Docker sets up a DNS entry for the service and, during DNS resolution, returns a list of IP addresses for the service name; the client then picks a specific endpoint to access directly.
version: "3.8"
services:
  wordpress:
    image: wordpress
    ports:
      - "8080:80"
    networks:
      - overlay
    deploy:
      mode: replicated
      replicas: 2
      endpoint_mode: vip
  mysql:
    image: mysql
    volumes:
      - db-data:/var/lib/mysql/data
    networks:
      - overlay
    deploy:
      mode: replicated
      replicas: 2
      endpoint_mode: dnsrr
volumes:
  db-data:
networks:
  overlay:
The endpoint_mode option is also available as a swarm mode command-line option (see docker service create). For a quick list of swarm-related docker commands, see Swarm mode CLI commands.
To learn more about Swarm Mode’s network model and service discovery mechanism, see Configure Service Discovery.
labels
Specify a label for the service. These labels are only applied to the corresponding service and are not applied to the container or container instance of the service.
version: "3.8"
services:
  web:
    image: web
    deploy:
      labels:
        com.example.description: "This label will appear on the web service"
To set labels for containers, specify labels for services outside deploy:
version: "3.8"
services:
  web:
    image: web
    labels:
      com.example.description: "This label will appear on all containers for the web service"
mode
It can be either global or replicated. global means exactly one container runs the service on each swarm node; replicated (the default) means a specified number of container instances can run.
See Replicated and Global Services under swarm.
version: "3.8"
services:
  worker:
    image: dockersamples/examplevotingapp_worker
    deploy:
      mode: global
placement
Specify placement constraints and preferences.
See the docker service create documentation for more information about constraints and preferences, including a complete description of the corresponding syntax, the available types, and so on.
version: "3.8"
services:
  db:
    image: postgres
    deploy:
      placement:
        constraints:
          - node.role == manager
          - engine.labels.operatingsystem == ubuntu 14.04
        preferences:
          - spread: node.labels.zone
max_replicas_per_node
Since version 3.8.
If a service is replicated (which is the default), max_replicas_per_node limits the number of replicas that can run on a single node.
When the scheduler cannot place a task without exceeding the max_replicas_per_node limit, a "no suitable node (max replicas per node limit exceed)" error is raised.
version: "3.8"
services:
  worker:
    image: dockersamples/examplevotingapp_worker
    networks:
      - frontend
      - backend
    deploy:
      mode: replicated
      replicas: 6
      placement:
        max_replicas_per_node: 1
replicas
If the service is replicated (the default mode), replicas specifies the number of container instances that should be running at any given time.
version: "3.7"
services:
  worker:
    image: dockersamples/examplevotingapp_worker
    networks:
      - frontend
      - backend
    deploy:
      mode: replicated
      replicas: 6
resources
Configure resource constraints.
NOTE: For non-swarm mode, this entry replaces the older resource constraint options from versions before 3 (such as cpu_shares, cpu_quota, cpuset, mem_limit, memswap_limit, mem_swappiness). See Upgrading version 2.x to 3.x.
Each of these resource constraint entries takes a single value, equivalent to its docker service create counterpart.
In the following example, the redis service is constrained to use no more than 50M of memory and 50% of a single core's CPU time, and reserves 20M of memory and 25% of CPU time as a baseline.
version: "3.8"
services:
  redis:
    image: redis:alpine
    deploy:
      resources:
        limits:
          cpus: '0.50'
          memory: 50M
        reservations:
          cpus: '0.25'
          memory: 20M
The following topics describe the available options for service or container resource constraints in swarm scenarios.
Out Of Memory Exceptions (OOME)
You will get an Out Of Memory Exception (OOME) if your services and container instances attempt to use more memory than the system has. At that point, a container instance, or the Docker daemon, may be killed by the kernel's OOM manager.
To prevent this, make sure your application uses memory within reasonable limits. For further assessment of such risks, see Understand the risks of running out of memory.
restart_policy
Indicates how to restart a container instance when it exits. Replaces restart.

- condition: one of none, on-failure, or any (default: any).
- delay: how long to wait between restart attempts (default: 0). Specify it as a duration.
- max_attempts: how many restart attempts to make before giving up (default: never give up).
- window: how long to wait before deciding whether a restart has succeeded (default: decide immediately, without waiting). Specify it as a duration.
version: "3.8"
services:
  redis:
    image: redis:alpine
    deploy:
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
        window: 120s
rollback_config
Version 3.7 file format and up
How the service should be rolled back when a rolling update fails:

- parallelism: the number of containers to roll back at a time. If set to 0, all containers are rolled back simultaneously.
- delay: the time to wait between each container group's rollback (default: 0).
- failure_action: what to do if the rollback fails. One of continue or pause (default: pause).
- monitor: the duration after each task update during which failure is monitored (ns|us|ms|s|m|h); default 0s.
- max_failure_ratio: the tolerable ratio of rollback failures (default: 0).
- order: the order of operations during rollback. One of stop-first or start-first (default: stop-first).
update_config
Indicates how the service should be updated. Useful for configuring rolling updates:

- parallelism: the number of containers updated at a time. If set to 0, all containers are updated simultaneously.
- delay: the time to wait between updating each container group (default: 0).
- failure_action: what to do if the update fails. One of continue, rollback, or pause (default: pause).
- monitor: the duration after each task update during which failure is monitored (ns|us|ms|s|m|h); default 0s.
- max_failure_ratio: the tolerable ratio of update failures (default: 0).
- order: the order of operations during the update. One of stop-first or start-first (default: stop-first).
NOTE: Order only works in V3.4 and later.
version: "3.8"
services:
  vote:
    image: dockersamples/examplevotingapp_vote:before
    depends_on:
      - redis
    deploy:
      replicas: 2
      update_config:
        parallelism: 2
        delay: 10s
        order: stop-first
DOCKER STACK DEPLOY
Unsupported options
The following suboptions (supported by docker-compose up and docker-compose run) are not supported by docker stack deploy:
- build
- cgroup_parent
- container_name
- devices
- tmpfs
- external_links
- links
- network_mode
- restart
- security_opt
- userns_mode
Tip: See also the section "How to configure volumes for services, swarms, and docker-stack.yml files". Volumes are supported in swarms and services, but they must be named volumes, or bound to services constrained to nodes that have access to the requisite volumes.
devices
List of device mappings. The usage is the same as the docker command's --device option.
devices:
  - "/dev/ttyUSB0:/dev/ttyUSB0"
NOTE: These options are ignored when deploying a stack to Swarm Mode.
See also debug a stack in swarm mode
dns
User-defined DNS server list. You can specify a single value or a list.
dns: 8.8.8.8

dns:
  - 8.8.8.8
  - 9.9.9.9
dns_search
User-defined DNS search domain name. You can specify a single value or a list.
dns_search: example.com

dns_search:
  - dc1.example.com
  - dc2.example.com
entrypoint
Overrides the default entrypoint (the one defined by ENTRYPOINT in the Dockerfile).
entrypoint: /code/entrypoint.sh
The entrypoint can also be a list:
entrypoint:
  - php
  - -d
  - zend_extension=/usr/local/lib/php/extensions/no-debug-non-zts-20100525/xdebug.so
  - -d
  - memory_limit=-1
  - vendor/bin/phpunit
NOTE: Setting entrypoint not only overrides any ENTRYPOINT default in the Dockerfile, but also clears any CMD default there, so the Dockerfile's CMD is ignored.
env_file
Imports environment variable values from the given file. It can be a single value or a list.
env_file: .env

env_file:
  - ./common.env
  - ./apps/web.env
  - /opt/secrets.env
For docker-compose -f FILE, the paths in env_file are relative to the folder containing FILE.
The environment variables declared in the environment will override the values introduced here.
Each line in the file should define an environment variable in VAR=VAL format. Lines beginning with # are comments and are ignored, as are blank lines.
# Set Rails/Rack environment
RACK_ENV=development
NOTE: If the service defines a build item, the environment variables defined by env_file are not visible during the build process. You can only use the build suboption args to define build-time environment variable values.
The value of VAL is used as-is and is not modified in any way. For example, if a value is surrounded by quotes, the quotes become part of the value.
The order of environment variable files also matters: values defined in later files override the old values defined earlier.
Translator's note: the original goes on about this at surprising length!
Keep in mind that the order of files in the list is significant in determining the value assigned to a variable that shows up more than once. The files in the list are processed from the top down. For the same variable specified in file a.env and assigned a different value in file b.env, if b.env is listed below (after), then the value from b.env stands. For example, given the following declaration in docker-compose.yml:
services:
  some-service:
    env_file:
      - a.env
      - b.env
And the following files:
# a.env
VAR=1
and
# b.env
VAR=hello
$VAR is hello.
environment
Add environment variables. You can use an array or a dictionary. Any boolean values (true, false, yes, no, and so on) must be quoted so they are treated as strings.
An environment variable given with only a key takes its value from the host environment in which docker-compose runs, which is useful for keeping sensitive information out of the file.
environment:
  RACK_ENV: development
  SHOW: 'true'
  SESSION_SECRET:

# or

environment:
  - RACK_ENV=development
  - SHOW=true
  - SESSION_SECRET
NOTE: If the service defines a build item, the environment variables defined here are not automatically visible during the build. Use the build suboption args to define build-time variables.
expose
Expose ports to linked services without publishing them to the host. Only the internal port may be specified.
expose:
  - "3000"
  - "8000"
external_links
Link containers started outside this docker-compose.yml to a given service.
It has similar semantics to the legacy links option.
external_links:
  - redis_1
  - project_db_1:mysql
  - project_db_1:postgresql
Note: In order to connect to a service defined in docker-compose.yml, externally created containers must be connected to at least one of the same networks. The links option is legacy; we recommend using networks instead.
NOTE: These options are ignored when deploying a stack to Swarm Mode.
See also debug a stack in swarm mode
A more recommended approach is to build a subnet via networks for linking containers.
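For instance, a sketch of that approach, assuming a network was created beforehand with docker network create shared_net (the network and service names are illustrative):

```yaml
# Both this stack and externally started containers attach to shared_net,
# so they can reach each other by name without external_links.
version: "3.8"
services:
  web:
    image: nginx:alpine
    networks:
      - shared_net
networks:
  shared_net:
    external: true
```

An externally started container joins the same network with docker run --network shared_net, after which the two sides can resolve each other by container or service name.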
extra_hosts
Add hostname mappings. These mappings are added to /etc/hosts. The function is equivalent to the command-line argument --add-host.
extra_hosts:
  - "somehost:162.242.195.82"
  - "otherhost:50.31.209.229"

For this service, entries with the corresponding hostname and IP are created in the container's /etc/hosts file, for example:

162.242.195.82  somehost
50.31.209.229   otherhost
healthcheck
Since v2.1
Used to verify that a service container is "healthy". See the HEALTHCHECK Dockerfile instruction.
healthcheck:
  test: ["CMD", "curl", "-f", "http://localhost"]
  interval: 1m30s
  timeout: 10s
  retries: 3
  start_period: 40s
interval, timeout, and start_period should be specified as durations.
Note: start_period is only available in v3.4 and later.
test must be either a single string or a list. If it is a list, the first item must be one of NONE, CMD, or CMD-SHELL. If it is a string, it is implicitly treated as having a CMD-SHELL prefix.
# Hit the local web app
test: ["CMD", "curl", "-f", "http://localhost"]
A string value implicitly invokes /bin/sh, so the two forms below are equivalent:
test: ["CMD-SHELL", "curl -f http://localhost || exit 1"]
test: curl -f https://localhost || exit 1
To disable any default health check defined in the image, use disable: true. This is equivalent to specifying test: ["NONE"].
healthcheck:
disable: true
image
Specify the name of the image.
image: redis
image: ubuntu:14.04
image: tutum/influxdb
image: example-registry.com:4000/postgresql
image: a4bc65fd
If the image doesn’t exist on the host, Compose will try to pull it down unless you also specify a build item.
init
Since v3.7
Run an init process inside the container that forwards signals and reaps processes. Set to true to enable this feature for the service.
version: "3.8"
services:
web:
image: alpine:latest
init: true
The default init binary is Tini, which is installed at /usr/libexec/docker-init on the daemon host if needed. You can configure the daemon to use a different binary via the init-path configuration option; see Configuration option.
isolation
Specify the isolation technology for a container. On Linux, only the default value is supported. On Windows, the acceptable values are default, process, and hyperv.
labels
To add metadata labels to containers, see Docker Labels. You can specify an array or a dictionary for it.
We recommend that you use reverse DNS tagging to define your tags, which can effectively avoid tag name conflicts.
labels:
com.example.description: "Accounting webapp"
com.example.department: "Finance"
com.example.label-with-empty-value: ""
labels:
- "com.example.description=Accounting webapp"
- "com.example.department=Finance"
- "com.example.label-with-empty-value"
links
This is a legacy feature and will be removed in the near future.
(Translator's note: the following is condensed rather than translated verbatim.)
Link another service to this container. You can specify both the SERVICE name and the link ALIAS (SERVICE:ALIAS), or you can skip the link ALIAS.
web:
links:
- db
- db:database
- redis
Linked services will be reachable at a hostname identical to the link alias (or the service name, if no alias is given).
Links are not necessary for inter-service communication. By default, any service can access other services by service name. See Links topic in Networking in Compose.
A link also expresses a dependency, but that is already the job of depends_on, so links are not necessary.
logging
Specify log forwarding configuration for the service.
logging:
  driver: syslog
  options:
    syslog-address: "tcp://192.168.0.42:123"
driver specifies the logging driver name, equivalent to --log-driver. The default value is json-file.
driver: "json-file"
driver: "syslog"
driver: "none"
Note
When retrieving logs with docker-compose up and docker-compose logs, only the json-file and journald drivers print to the console; other log drivers forward logs to their respective destinations, and no log output is available locally.
For the available logging drivers, see docs.docker.com/config/cont…
Specify driver options with options, as with --log-opt. For example, for syslog:
driver: "syslog"
options:
  syslog-address: "tcp://192.168.0.42:123"
The default log forwarding driver is json-file. You can specify the log cutting size and maximum number of log history files to keep:
options:
max-size: "200k"
max-file: "10"
In the example above, log file storage is limited to 200 kB per file; when a file exceeds that size, the log is rotated into history files. At most 10 history files are kept, and older files are discarded.
Here is a complete docker-compose.yml example that demonstrates how to limit log storage space:
version: "3.8"
services:
some-service:
image: some-service
logging:
driver: "json-file"
options:
max-size: "200k"
max-file: "10"
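As a quick sanity check (not part of Compose itself), the worst-case disk usage of the json-file configuration above is simply max-size × max-file:

```python
def max_log_usage(max_size_bytes: int, max_file: int) -> int:
    """Worst-case disk usage of json-file rotation: up to `max_file`
    log files are kept, each capped at `max_size_bytes`."""
    return max_size_bytes * max_file

# max-size: "200k", max-file: "10"  ->  about 2 MB per container
print(max_log_usage(200 * 1024, 10))  # 2048000
```

So a service configured this way can never consume more than roughly 2 MB of log storage.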
The logging options available effectively depend on the logging driver being used.
The logging options that control the number and size of log files apply to the json-file driver; they may not apply to other logging drivers. For the complete list of options available for each driver, refer to the Logging drivers documentation.
network_mode
Network model.
The value is the same as for --network, with the addition of the service:[service name] mode.
network_mode: "bridge"
network_mode: "host"
network_mode: "none"
network_mode: "service:[service name]"
network_mode: "container:[container name/id]"
NOTE: These options are ignored when deploying a stack to Swarm Mode.
See also debug a stack in swarm mode
NOTE: network_mode: “host” cannot be mixed with links.
networks
The networks to join. The target networks are defined under the top-level networks section of docker-compose.yml.
services:
some-service:
networks:
- some-network
- other-network
ALIASES
Specify an alias (that is, hostname) for the service on the network. Other containers on the same network can use the service name or service alias to connect to the container instance of the service.
Since aliases are within the network scope (network domain), the same service can have different aliases on different networks.
Note
A network-scoped alias can be shared by multiple containers, and even by multiple services. If you do this, there is no guarantee which specific container the alias will resolve to.
The alias definition looks like this:
services:
some-service:
networks:
some-network:
aliases:
- alias1
- alias3
other-network:
aliases:
- alias2
A more complex and complete example: three services (web, worker, and db) belong to two networks (new and legacy). The db service can be reached at the hostname db or database on the new network, and at db or mysql on the legacy network.
version: "3.8"
services:
web:
image: "nginx:alpine"
networks:
- new
worker:
image: "my-worker-image:latest"
networks:
- legacy
db:
image: mysql
networks:
new:
aliases:
- database
legacy:
aliases:
- mysql
networks:
new:
legacy:
IPV4_ADDRESS, IPV6_ADDRESS
Specify a static IP address.
Note that in the corresponding top-level network configuration, an ipam block must be configured with the subnet, and the static IP address must fall within that subnet definition.
If IPv6 addressing is desired, the enable_ipv6 option must be set, and you must use a version 2.x Compose file. IPv6 options do not currently work in swarm mode.
One example is:
version: "3.8"

services:
  app:
    image: nginx:alpine
    networks:
      app_net:
        ipv4_address: 172.16.238.10
        ipv6_address: 2001:3984:3989::10

networks:
  app_net:
    ipam:
      driver: default
      config:
        - subnet: "172.16.238.0/24"
        - subnet: "2001:3984:3989::/64"
pid
pid: "host"
Set the PID mode to host so that server processes inside the container share the PID address space with the host operating system. This is a classic Linux/Unix concept, not expanded on here; such sharing enables IPC between the container and the host via the shared PID address space.
ports
Expose ports to the host machine.
Note: Port exposure is incompatible with network_mode: host.
The short format
You can specify both the HOST and CONTAINER port (HOST:CONTAINER) to complete the mapping, or you can specify only the CONTAINER port to automatically map to a temporary port (starting with 32768) for the same HOST port.
Note
When mapping ports in the HOST:CONTAINER format, you may get erroneous results when using a container port lower than 60, because YAML parses numbers in the format xx:yy as a base-60 value. For this reason, we recommend always wrapping port mappings in quotes, so that they are parsed as strings.
ports:
- "3000"
- "3000-3005"
- "8000:8000"
- "9090-9091:8080-8081"
- "49100:22"
- "127.0.0.1:8001:8001"
- "127.0.0.1:5000-5010:5000-5010"
- "6060:6060/udp"
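To see why an unquoted mapping like 56:56 misbehaves, note that YAML 1.1 resolves colon-separated digit groups as a base-60 (sexagesimal) integer. A minimal sketch of that resolution rule (illustrative, not Compose code):

```python
def yaml11_sexagesimal(token: str) -> int:
    """Resolve a YAML 1.1 colon-separated integer, e.g. '56:56' -> 56*60 + 56.
    YAML 1.1 parsers apply this rule to unquoted scalars, which is why an
    unquoted port mapping below 60 can silently turn into a number."""
    value = 0
    for part in token.split(":"):
        value = value * 60 + int(part)
    return value

print(yaml11_sexagesimal("56:56"))  # 3416, not the mapping "56:56"
# Quoting the scalar ("56:56") keeps it a string and avoids the surprise.
```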
Long format
Verbose definitions are allowed:
- target: the port inside the container
- published: the port published on the Docker host
- protocol: the port protocol (tcp or udp)
- mode: host publishes a host port on each node; ingress (swarm mode) load-balances a swarm-wide port across all nodes
ports:
- target: 80
published: 8080
protocol: tcp
mode: host
The fields are largely self-explanatory.
NOTE: The long format only works in v3.2 and later.
restart
no is the default restart policy: the container is never restarted automatically, no matter how it exits.
always restarts the container in all cases.
on-failure restarts the container only when it exits with a failure code.
unless-stopped always restarts the container unless it was explicitly stopped.
restart: "no"
restart: always
restart: on-failure
restart: unless-stopped
NOTE: These options are ignored when deploying a stack to Swarm Mode. (Use restart_policy to do this.)
See also debug a stack in swarm mode
secrets
Per-service configuration granting access to entries defined in the top-level secrets section. Both short and long formats are supported.
The short format
The short format specifies only the name of the secret. The container mounts the corresponding content at /run/secrets/<secret_name> and can access it there.
The following example uses the short format to give redis access to my_secret and my_other_secret. The contents of my_secret are defined in ./my_secret.txt, while my_other_secret is defined as an external resource, e.g. created via docker secret create. If the external resource is not found, the stack deployment fails with a secret not found error.
version: "3.8"
services:
redis:
image: redis:latest
deploy:
replicas: 1
secrets:
- my_secret
- my_other_secret
secrets:
my_secret:
file: ./my_secret.txt
my_other_secret:
external: true
Long format
The long format allows finer control over how the secret is used within the stack's containers.
- source: the name of the secret as defined in Docker.
- target: the file name mounted under /run/secrets/ in the container. Defaults to the source name if not specified.
- uid & gid: the UID and GID of the mounted file inside the container. Both default to 0 if not specified. Not supported on Windows.
- mode: the octal permissions of the mounted file inside the container. The default in Docker 1.13.1 was 0000, but 0444 in newer versions. The mounted file is not writable; the executable bit can be set but is generally meaningless.
Here’s an example:
version: "3.8"
services:
redis:
image: redis:latest
deploy:
replicas: 1
secrets:
- source: my_secret
target: redis_secret
uid: '103'
gid: '103'
mode: 0440
secrets:
my_secret:
file: ./my_secret.txt
my_other_secret:
external: true
Long and short formats can be mixed if you are defining multiple sensitive content.
security_opt
Override the default label semantics for each container.
security_opt:
- label:user:USER
- label:role:ROLE
This is usually related to seccomp; security configuration is a lengthy topic that won't be expanded on here.
NOTE: These options are ignored when deploying a stack to Swarm Mode.
See also debug a stack in swarm mode
stop_grace_period
Specifies a grace period after which, if the container has ignored the SIGTERM signal (or whatever signal stop_signal defines) and failed to shut down cleanly, its processes are forcibly killed with SIGKILL.
stop_grace_period: 1s
stop_grace_period: 1m30s
By default, it will wait for 10 seconds.
stop_signal
Set an alternate signal to shut down the container instance normally. The SIGTERM signal is used by default.
stop_signal: SIGUSR1
sysctls
Set the kernel parameters for the container. You can use an array or a dictionary.
sysctls:
net.core.somaxconn: 1024
net.ipv4.tcp_syncookies: 0
sysctls:
- net.core.somaxconn=1024
- net.ipv4.tcp_syncookies=0
NOTE: These options are ignored when deploying a stack to Swarm Mode.
See also debug a stack in swarm mode
tmpfs
The v2-era commentary has been removed from the official v3.8 documentation:
Since v2
Mount a temporary file system into the container. It can be a single value or a list.
tmpfs: /run

tmpfs:
  - /run
  - /tmp

NOTE: These options are ignored when deploying a stack to Swarm Mode.
See also debug a stack in swarm mode
Since v3.6
Mount a temporary file system to the container. You can use a single value or an array.
tmpfs: /run
tmpfs:
- /run
- /tmp
NOTE: These options are ignored when deploying a stack to Swarm Mode.
See also debug a stack in swarm mode
Mount a temporary file system into the container (long syntax). The size parameter specifies the tmpfs size in bytes; by default it is unlimited.
- type: tmpfs
target: /app
tmpfs:
size: 1000
ulimits
Override the default ulimits of the container. You can specify either a single integer for a single limit, or a mapping giving separate soft and hard limits.
ulimits:
nproc: 65535
nofile:
soft: 20000
hard: 40000
userns_mode
userns_mode: "host"
Disable the user namespace for this service, when the Docker daemon is configured to run with user namespaces.
NOTE: These options are ignored when deploying a stack to Swarm Mode.
See also debug a stack in swarm mode
volumes
Mount the host path or name the volume.
You can mount a host path into a service without defining it in the top-level volumes section.
If you want to reuse a volume across multiple services, define a named volume in the top-level volumes section.
Named volumes can be used with services, swarms, and stack files.
NOTE: Define a named volume in the top-level volumes section and reference it in a service's volumes list.
The earlier volumes_from is no longer used.
See Use volumes and Volume Plugins.
The following example shows a named volume (mydata) being used by the web service, which also bind-mounts a host folder (./static) into the container. The db service bind-mounts a host socket file into the container and uses another named volume (dbdata).
version: "3.8"
services:
web:
image: nginx:alpine
volumes:
- type: volume
source: mydata
target: /data
volume:
nocopy: true
- type: bind
source: ./static
target: /opt/app/static
db:
image: postgres:latest
volumes:
- "/var/run/postgres/postgres.sock:/var/run/postgres/postgres.sock"
- "dbdata:/var/lib/postgresql/data"
volumes:
mydata:
dbdata:
Note
For more information on volumes, see the Use Volumes and Volume Plugins sections.
The short format
You can use the HOST:CONTAINER format, or append an access mode: HOST:CONTAINER:ro.
The short-format syntax is [SOURCE:]TARGET[:MODE]. SOURCE can be a host path or a volume name. TARGET is the container path at which the volume is mounted. MODE can be ro or rw (read-only or read-write; rw is the default).
You can mount a host path relative to the directory of the Compose file; a relative path must always begin with . or ..
volumes:
# Just specify a path and let the Engine create a volume
- /var/lib/mysql
# Specify an absolute path mapping
- /opt/data:/var/lib/mysql
# Path on the host, relative to the Compose file
- ./cache:/tmp/cache
# User-relative path
- ~/configs:/etc/configs/:ro
# Named volume
- datavolume:/var/lib/mysql
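A simplified sketch of the [SOURCE:]TARGET[:MODE] decomposition described above (illustrative only; the real parser also handles Windows drive letters and additional options):

```python
def parse_short_volume(spec: str):
    """Split a short-format volume spec into (source, target, mode).
    Simplified: assumes POSIX-style paths without embedded colons."""
    parts = spec.split(":")
    if len(parts) == 1:              # TARGET only: anonymous volume
        return (None, parts[0], "rw")
    if len(parts) == 2:              # SOURCE:TARGET
        return (parts[0], parts[1], "rw")
    if len(parts) == 3 and parts[2] in ("ro", "rw"):
        return (parts[0], parts[1], parts[2])
    raise ValueError(f"invalid volume spec: {spec!r}")

print(parse_short_volume("/var/lib/mysql"))             # (None, '/var/lib/mysql', 'rw')
print(parse_short_volume("./cache:/tmp/cache"))         # ('./cache', '/tmp/cache', 'rw')
print(parse_short_volume("~/configs:/etc/configs/:ro")) # ('~/configs', '/etc/configs/', 'ro')
```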
Long format
The long format allows for finer control.
- type: the mount type: volume, bind, tmpfs, or npipe.
- source: the mount source. It can be a host path, a volume name defined in the top-level volumes section, and so on. Not applicable for a tmpfs mount.
- target: the mount point path inside the container.
- read_only: a boolean setting the volume read-only.
- bind: additional bind options:
  - propagation: the propagation mode for the bind.
- volume: additional volume options:
  - nocopy: a boolean disabling data copying (by default, when a volume is first created, existing content at the container path is copied into the volume).
- tmpfs: additional tmpfs options:
  - size: the tmpfs capacity, in bytes.
- consistency: the consistency requirement of the mount: consistent (host and container have an identical view), cached (reads are buffered and the host view is authoritative), or delegated (reads and writes are buffered and the container view is authoritative).
version: "3.8"
services:
web:
image: nginx:alpine
ports:
- "80:80"
volumes:
- type: volume
source: mydata
target: /data
volume:
nocopy: true
- type: bind
source: ./static
target: /opt/app/static
networks:
webnet:
volumes:
mydata:
The long format is available in v3.2 and later.
Note
The long-format syntax requires the host folder to exist in advance when bind-mounting a host directory into a container. With the short format, the folder is created on the fly if it doesn't already exist. See the bind mounts documentation for more information.
VOLUMES FOR SERVICES, SWARMS, AND STACK FILES
When working in services, swarms, or docker-stack.yml scenarios, note that a service in swarm can be deployed to any node, and when the service is updated, it may not be on the original node.
When the specified volume does not exist, Docker automatically creates an anonymous volume for the referenced service. Anonymous volumes are non-persistent, so when the associated container instance exits and is removed, the anonymous volume is destroyed.
If you want to persist your data, use named volumes and choose the appropriate volume driver, which should be cross-host so that data can roam between hosts. Otherwise, you should set constraints on the service so that it will only be deployed to specific nodes on which the corresponding volume service is working correctly.
As an example, the docker-stack.yml file of the votingapp sample in Docker Labs defines a db service running PostgreSQL. It uses a named volume, db-data, to persist the database data, and a placement constraint pins the service to manager nodes so that the volume is always available. Here is the relevant source:
version: "3.8"
services:
  db:
    image: postgres:9.4
    volumes:
      - db-data:/var/lib/postgresql/data
    networks:
      - backend
    deploy:
      placement:
        constraints: [node.role == manager]
CACHING OPTIONS FOR VOLUME MOUNTS (DOCKER DESKTOP FOR MAC)
In Docker 17.04 CE Edge and later (including 17.06 CE Edge and Stable), you can configure consistency constraints governing how volumes are synchronized between container and host. The flags are:
- consistent: fully consistent. The default; host and container have an identical view.
- cached: the host's view is authoritative. Reads from the volume are buffered.
- delegated: the container's view is authoritative. Reads and writes to the volume are buffered.
This is adapted specifically for Docker Desktop for Mac. Because of known limitations of osxfs file sharing, properly chosen consistency flags can improve performance when accessing mounted volumes from inside and outside the container.
Here is an example of a cached volume:
version: "3.7"
services:
  php:
    image: php:7.1-fpm
    ports:
      - "9000"
    volumes:
      - .:/var/www/project:cached
With reads and writes fully buffered, changes made inside the container (frequent writes are typical for, say, a PHP website) are not immediately visible on the host, and writes from the container may pile up.
To check whether a volume is consistent within or outside the container, see Performance Tuning for Volume mounts (Shared filesystems).
(Translator's note: not translated in full here; the topic is too long to render exactly.)
domainname, hostname, ipc, mac_address, privileged, read_only, shm_size, stdin_open, tty, user, working_dir
Each of these configuration options takes a single value, corresponding to the equivalent docker run command-line argument. mac_address is deprecated.
user: postgresql
working_dir: /code
domainname: foo.com
hostname: foo
ipc: host
mac_address: 02:42:ac:11:65:43
privileged: true
read_only: true
shm_size: 64M
stdin_open: true
tty: true
Specifying durations
Some configuration options, such as interval and timeout (both sub-options of healthcheck), accept a duration as a string value. They should have this format:
2.5s
10s
1m30s
2h32m
5h34m56s
The suffixes that can be appended to numeric values are us, ms, s, m, and h. Their meanings are self-evident.
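A rough sketch of how such duration strings decompose into seconds (illustrative only, not the actual Compose parser):

```python
import re

UNITS = {"us": 1e-6, "ms": 1e-3, "s": 1, "m": 60, "h": 3600}

def parse_duration(text: str) -> float:
    """Parse strings like '1m30s' or '2.5s' into a number of seconds."""
    matches = re.findall(r"(\d+(?:\.\d+)?)(us|ms|s|m|h)", text)
    # Reject input with leftover characters between or around the components.
    if not matches or "".join(n + u for n, u in matches) != text:
        raise ValueError(f"invalid duration: {text!r}")
    return sum(float(n) * UNITS[u] for n, u in matches)

print(parse_duration("1m30s"))   # 90.0
print(parse_duration("2.5s"))    # 2.5
print(parse_duration("2h32m"))   # 9120.0
```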
Specifying byte values
Some configuration options, such as the shm_size sub-option of build, accept a capacity size as a string value. They should have this format:
2b
1024kb
2048k
300m
1gb
Valid suffix units are b, k, m, and g, as well as kb, mb, and gb. Decimal (fractional) values are not legal.
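A sketch of how such byte-size strings might be interpreted (illustrative; not the actual Compose implementation):

```python
import re

MULTIPLIERS = {"b": 1, "k": 1024, "kb": 1024,
               "m": 1024**2, "mb": 1024**2,
               "g": 1024**3, "gb": 1024**3}

def parse_bytes(text: str) -> int:
    """Parse '2b', '1024kb', '300m', '1gb' into a byte count.
    Only integer values are accepted, matching the format's rules."""
    m = re.fullmatch(r"(\d+)(kb|mb|gb|b|k|m|g)", text.lower())
    if not m:
        raise ValueError(f"invalid byte value: {text!r}")
    return int(m.group(1)) * MULTIPLIERS[m.group(2)]

print(parse_bytes("300m"))    # 314572800
print(parse_bytes("1024kb"))  # 1048576
```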
Volume formatting manual –volumes
The top-level volumes section declares and creates named volumes (without using volumes_from) that can be referenced in the volumes sub-section of each service, so they can be reused, even across multiple services. The docker volume subcommand has more reference information.
You can also refer to Use volumes and Volume Plugins for Volume usage.
Here is an example that contains two services. The database’s data store folder is shared between the two services, so the database can use the storage folder, and the backup service can also manipulate it to complete the backup task:
version: "3.8"
services:
db:
image: db
volumes:
- data-volume:/var/lib/db
backup:
image: backup-service
volumes:
- data-volume:/var/lib/backup/data
volumes:
data-volume:
The entries under the top-level Volumes section can be empty without specifying details, so the default volume driver will be applied (usually the local volume driver).
But you can also customize it with the following parameters:
driver
Specify which volume driver will be used. In general, the default is the local driver. If the volume driver is unavailable or not working, the Docker Engine returns an error when you run docker-compose up.
driver: foobar
driver_opts
Optionally specify a set of key-value pair parameters that will be passed to the volume driver. So these parameter sets are specific to the volume driver, please refer to the volume driver documentation.
volumes:
  example:
    driver_opts:
      type: "nfs"
      o: "addr=10.40.0.199,nolock,soft,rw"
      device: ":/docker/example"
external
If set to true, the corresponding volume is created outside of the Compose orchestration file. docker-compose up will not attempt to create it, and returns an error if the volume does not exist.
In v3.3 and earlier Compose formats, external cannot be combined with other volume configuration keys such as driver, driver_opts, and labels. This restriction no longer applies in v3.4 and later.
In the following example, Compose looks for an external volume named data and mounts it to the DB service instead of trying to create a new volume named [projectName]_data.
version: "3.8"
services:
db:
image: postgres
volumes:
- data:/var/lib/postgresql/data
volumes:
data:
external: true
external.name is deprecated in v3.4+; use the top-level name key instead.
You can also specify the actual volume name separately (in this case, data acts as the volume's alias when it is referenced in the orchestration file):
volumes:
data:
external:
name: actual-name-of-volume
External volumes are always created with docker stack deploy
When deploying to a swarm with docker stack deploy, external volumes that do not exist are always created automatically. For further information, see moby/moby#29976.
labels
Use Docker Labels to add metadata to containers. It can be array format or dictionary format.
We recommend that you use reverse DNS tagging to prefix your metadata table keys with reverse domain names to avoid potential collisions with other applications’ table keys of the same name:
labels:
com.example.description: "Database volume"
com.example.department: "IT/Ops"
com.example.label-with-empty-value: ""
labels:
- "com.example.description=Database volume"
- "com.example.department=IT/Ops"
- "com.example.label-with-empty-value"
name
Since v3.4
Specify a custom name for the volume. The name value can be used to reference volumes whose names contain special characters. The value is used as-is and will not be scoped with the stack name.
version: "3.8"
volumes:
data:
name: my-app-data
Name can be combined with external:
version: "3.8"
volumes:
data:
external: true
name: my-app-data
Network formatting manual –networks
The top-level networks section lets you configure the networks to be created and used (the Compose internal networks).
- For a complete description of using Docker network environment features in Compose, and all network driver options, see the Networking Guide.
- For Docker Labs’ network-related coaching use cases, read up on Designing Scalable, Portable Docker Container Networks.
driver
Specify the driver for the network.
The default driver depends on how the Docker Engine was started. Typically, it is the bridge driver on a single-node host and the overlay driver in Swarm Mode.
If the driver is not available, Docker Engine will return an error.
driver: overlay
bridge
By default Docker uses the Bridge driver on each host node. For information on how Bridge networking works, see Docker Labs’ network-related tutorial use case: Bridge Networking.
overlay
The overlay driver builds a named subnet between multiple Swarm Mode nodes, which is a virtual network across hosts.
- For how to establish an overlay network in swarm mode so that services work correctly across hosts, refer to the Docker Labs networking tutorial: Overlay networking and service discovery.
- If you want to dig deeper into how overlay builds a cross-host virtual network and how messages flow within it, refer to the Overlay Driver Network Architecture.
host or none
Use the host's network stack, or no networking at all.
These are equivalent to the command-line arguments --net=host and --net=none.
These two drivers and network models can only be used via docker stack. If you are using docker-compose commands, use network_mode to specify them instead.
If you want to use a particular network for a build, use the network key under build, as shown in the second YAML example below.
There is a syntax caveat with the built-in host and none models: define an external network named host or none (you don't need to actually create them, since both are part of Docker's built-in network model) and reference it in the Compose file under an alias such as hostnet or nonet, as in:
version: "3.8"
services:
web:
networks:
hostnet: {}
networks:
hostnet:
external: true
name: host
---
services:
  web:
    ...
    build:
      ...
      network: host
      context: .

---
services:
  web:
    ...
    networks:
      nonet: {}

networks:
  nonet:
    external: true
    name: none
driver_opts
Specifies a set of options represented by a set of key-value pairs to pass to the network driver. They are driver-specific, so the specific parameters available should refer to the corresponding driver documentation.
driver_opts:
foo: "bar"
baz: 1
attachable
Since v3.2 +
Can only be used with driver: overlay.
If set to true, standalone containers can also attach to this network. A standalone container attached to an overlay network can communicate with the services and other standalone containers on it, including container instances attached from other Docker daemons.
networks:
mynet1:
driver: overlay
attachable: true
enable_ipv6
Enable IPv6 on this network/subnet.
Not supported in v3+: enable_ipv6 requires the v2 format and cannot be used in swarm mode.
ipam
Custom IPAM configuration. Each sub-option is optional.
- driver: a custom IPAM driver, instead of the default.
- config: a list of one or more configuration blocks, each with the following sub-option:
  - subnet: a subnet in CIDR format that delimits the network segment.
A complete example:
ipam:
  driver: default
  config:
    - subnet: 172.28.0.0/16
NOTE: Additional IPAM options such as gateway are only available in v2.
internal
By default, Docker also connects the network to a bridge network to provide external connectivity. If you want to create an externally isolated overlay network, set this option to true.
labels
Use Docker Labels to add metadata to containers. It can be array format or dictionary format.
We recommend that you use reverse DNS tagging to prefix your metadata table keys with reverse domain names to avoid potential collisions with other applications’ table keys of the same name:
labels:
com.example.description: "Financial transaction network"
com.example.department: "Finance"
com.example.label-with-empty-value: ""
labels:
- "com.example.description=Financial transaction network"
- "com.example.department=Finance"
- "com.example.label-with-empty-value"
external
If set to true, the network is created and managed outside of the Compose orchestration file. docker-compose up will not attempt to create it, and returns an error if the network does not exist.
In v3.3 and earlier formats, external cannot be combined with driver, driver_opts, ipam, or internal; this restriction was removed in v3.5 and later.
In the following example, proxy is the gateway to the outside world. Compose looks for an existing external network named outside (created with docker network create outside) instead of automatically creating a new network named [projectName]_outside:
version: "3.8"
services:
proxy:
build: ./proxy
networks:
- outside
- default
app:
build: ./app
networks:
- default
networks:
outside:
external: true
external.name is deprecated in v3.5 and later; use name instead.
You can also specify the network name separately, to be referenced within the Compose orchestration file.
name
Since v3.5
Set a custom name for the network. The name value can be used to reference networks whose names contain special characters. The value is used as-is and will not be scoped with the stack name.
version: "3.8"
networks:
network1:
name: my-app-net
Name can be used with external:
version: "3.8"
networks:
network1:
external: true
name: my-app-net
Configuration item Formatting manual –configs
The top-level configs section declares configuration items, or references to them, that services in the stack can be granted access to. The source of a configuration item is either file or external.
- file: the contents of the configuration item are in a host file.
- external: if set to true, the configuration item has already been created. Docker will not attempt to create it, and raises a config not found error if it does not exist.
- name: the name of the configuration item in Docker. The name value can be used to reference configs whose names contain special characters. The value is used as-is and will not be scoped with the stack name.
- driver and driver_opts: the name of a custom secret driver, and driver-specific options passed as key/value pairs. Introduced in the v3.8 file format, and only supported when using docker stack.
- template_driver: the name of the templating driver to use, which controls whether and how the secret payload is evaluated as a template. If no driver is set, no templating is used. The only driver currently supported is golang, which uses golang templates. Introduced in the v3.8 file format, and only supported when using docker stack. Refer to use a templated config for examples of templated configs.
In the following example, my_first_config is created automatically and named <stack_name>_my_first_config when deployed as part of the stack, while my_second_config must already exist.
```yaml
configs:
  my_first_config:
    file: ./config_data
  my_second_config:
    external: true
```
The other variation is an external configuration item with a name definition: the item exists in Docker under the name redis_config, while services in the Compose file reference it as my_second_config:
```yaml
configs:
  my_first_config:
    file: ./config_data
  my_second_config:
    external:
      name: redis_config
```
You still need to declare a configs section within each service definition to grant it access to configuration items; see Grant access to the config.
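As a sketch of that grant-access pattern (the names here are illustrative, not from the original), a service lists the config it needs under its own configs key, optionally with a target path and file mode:

```yaml
version: "3.8"
services:
  redis:
    image: redis:latest
    configs:
      - source: my_config      # refers to the top-level configs entry
        target: /redis_config  # path inside the container
        mode: 0440
configs:
  my_config:
    file: ./my_config.txt
```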
Sensitive information items format reference – secrets
The top-level secrets section declares sensitive information items, or references to them, that can be granted to the services in the stack. The source of a sensitive information item can be file or external.
- `file`: the contents of the sensitive information item are in a file on the host.
- `external`: if set to `true`, the sensitive information item has already been created. Docker will not attempt to create it, and raises a `secret not found` error if it does not exist.
- `name`: the name of the sensitive information item in Docker. The name field can be used to reference secrets whose names contain special characters. Note that the value is used as is: it is not quoted, and is not prefixed with the stack name.
- `template_driver`: the name of the templating driver to use, which controls whether and how the secret payload is evaluated as a template. If no driver is set, no templating is used. The only currently supported driver is `golang`, which uses a golang template. Introduced in the v3.8 file format, and only supported when using `docker stack`.
In the following example, my_first_secret is created automatically and named <stack_name>_my_first_secret when deployed as part of the stack, while my_second_secret must already exist.
```yaml
secrets:
  my_first_secret:
    file: ./secret_data
  my_second_secret:
    external: true
```
Another variation is an external sensitive information item with a name definition: the secret exists in Docker under the name redis_secret, while services in the Compose file reference it as my_second_secret.
Compose file v3.5 and later:
```yaml
secrets:
  my_first_secret:
    file: ./secret_data
  my_second_secret:
    external: true
    name: redis_secret
```
Compose file v3.4 and earlier:
```yaml
my_second_secret:
  external:
    name: redis_secret
```
You still need to declare a secrets section within each service definition to grant it access to sensitive information items; see Grant access to the secret.
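As a sketch of that pattern (the names here are illustrative, not from the original), the short syntax simply lists the secret by name; Docker mounts it at /run/secrets/<name> inside the container:

```yaml
version: "3.8"
services:
  redis:
    image: redis:latest
    secrets:
      - my_secret   # mounted at /run/secrets/my_secret
secrets:
  my_secret:
    file: ./my_secret.txt
```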
Variable substitution
Environment variables can be used in the Compose orchestration file. When docker-compose runs, Compose substitutes variable values from the shell environment. For example, if the operating system environment contains the definition POSTGRES_VERSION=9.3, then the following definition
```yaml
db:
  image: "postgres:${POSTGRES_VERSION}"
```
is equivalent to
```yaml
db:
  image: "postgres:9.3"
```
If the environment variable does not exist or is an empty string, it is treated as an empty string.
You can use a .env file to set default values for environment variables. Compose automatically looks for a .env file in the current folder to obtain environment variable values.
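For instance (the value here is illustrative), a .env file placed next to the compose file might contain:

```
# .env
POSTGRES_VERSION=9.3
```

With this in place, `${POSTGRES_VERSION}` in the compose file resolves to 9.3 when you run docker-compose up.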
The .env file only works in docker-compose up scenarios; it is not used by docker stack deploy.
Both the $VARIABLE and ${VARIABLE} syntaxes are supported. In addition, starting with the v2.1 format, the following shell-like forms can be used:
- `${VARIABLE:-default}` evaluates to `default` if the environment variable `VARIABLE` is unset or is an empty string.
- `${VARIABLE-default}` evaluates to `default` only if the environment variable `VARIABLE` is unset.
Similarly, the following syntax lets you mark a variable as mandatory:
- `${VARIABLE:?err}` exits with an error message containing `err` if the environment variable `VARIABLE` is unset or empty.
- `${VARIABLE?err}` exits with an error message containing `err` if the environment variable `VARIABLE` is unset.
Other Shell syntax features are not supported, such as ${VARIABLE/foo/bar}.
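To make the four forms above concrete, here is a minimal Python sketch of these substitution rules (illustrative only; this is not Compose's actual implementation, and the `substitute` helper is a name invented for this example):

```python
import re

# Matches ${VAR}, ${VAR:-def}, ${VAR-def}, ${VAR:?err}, ${VAR?err}
_PATTERN = re.compile(r"\$\{(\w+)(?:(:?[-?])([^}]*))?\}")

def substitute(text, env):
    """Sketch of Compose-style ${VARIABLE} substitution (v2.1+ forms)."""
    def repl(match):
        name, op, arg = match.groups()
        value = env.get(name)
        # The ':' variants also treat an empty string as unset.
        treat_empty_as_unset = op is not None and op.startswith(":")
        missing = value is None or (treat_empty_as_unset and value == "")
        if op is None:
            # Plain ${VAR}: an unset variable becomes an empty string.
            return value if value is not None else ""
        if op.endswith("-"):
            # Default-value forms.
            return arg if missing else value
        # Mandatory forms ('?'): fail loudly when missing.
        if missing:
            raise ValueError(arg or f"{name} is required")
        return value
    return _PATTERN.sub(repl, text)
```

For example, `substitute("postgres:${POSTGRES_VERSION}", {"POSTGRES_VERSION": "9.3"})` yields `"postgres:9.3"`, while `substitute("${V:-def}", {"V": ""})` yields `"def"` but `substitute("${V-def}", {"V": ""})` yields `""`.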
If you need a literal dollar sign, use $$. The doubled $$ does not participate in environment variable substitution. For example:
```yaml
web:
  build: .
  command: "$$VAR_NOT_INTERPOLATED_BY_COMPOSE"
```
Compose will warn you if you forget this rule and use a single $character:
The VAR_NOT_INTERPOLATED_BY_COMPOSE is not set. Substituting an empty string.
Extension fields
Since v3.4
Extension fields let you reuse orchestration configuration fragments. They can be free-form, provided they are defined at the top level of the YAML document and their section names begin with x-:
```yaml
version: '3.4'
x-custom:
  items:
    - a
    - b
  options:
    max-size: '12m'
  name: "custom"
```
NOTE
Starting with v3.7 (for the 3.x series), or with v2.4 (for the 2.x series), extension fields can also be placed at the first level below the top-level sections services, volumes, networks, configs, and secrets.
Something like this:
```yaml
version: '3.7'
services:
  redis:
    # ...
    x-custom:
      items:
        - a
        - b
      options:
        max-size: '12m'
      name: "custom"
```
By free-form, I mean that these definitions are not interpreted by Compose themselves. However, when you reference them elsewhere, they are expanded at the point of reference, and the resulting content is then interpreted by Compose in context. This uses the YAML anchors syntax.
For example, if you have multiple services that use the same logging option:
```yaml
logging:
  options:
    max-size: '12m'
    max-file: '5'
  driver: json-file
```
You can define it like this:
```yaml
x-logging:
  &default-logging
  options:
    max-size: '12m'
    max-file: '5'
  driver: json-file

services:
  web:
    image: myapp/web:latest
    logging: *default-logging
  db:
    image: mysql:latest
    logging: *default-logging
```
With the YAML merge type syntax, you can also override certain sub-options when inserting extended field definitions. For example:
```yaml
version: '3.4'

x-volumes:
  &default-volume
  driver: foobar-storage

services:
  web:
    image: myapp/web:latest
    volumes: ["vol1", "vol2", "vol3"]

volumes:
  vol1: *default-volume
  vol2:
    << : *default-volume
    name: volume02
  vol3:
    << : *default-volume
    driver: default
    name: volume-local
```
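The merge behavior can be approximated with Python dict unpacking: a plain alias pulls in the anchored mapping unchanged, while keys written after `<<:` override the merged base. A rough analogy (illustrative only):

```python
# The anchored mapping &default-volume, as plain data:
default_volume = {"driver": "foobar-storage"}

# vol1: *default-volume  -> an exact copy of the anchor
vol1 = dict(default_volume)

# vol2: merge the anchor, then add a key (<< : *default-volume, name: volume02)
vol2 = {**default_volume, "name": "volume02"}

# vol3: merge the anchor, override 'driver', and add 'name'
vol3 = {**default_volume, "driver": "default", "name": "volume-local"}

print(vol2)  # {'driver': 'foobar-storage', 'name': 'volume02'}
print(vol3)  # {'driver': 'default', 'name': 'volume-local'}
```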
Compose document Reference
- User guide
- Installing Compose
- Compose file versions and upgrading
- Get started with Docker
- Samples
- Command line reference
The end
- Original text: docs.docker.com/compose/com… .
- Translation: github.com/hedzr/docke…
- The translation | hedzr.github.io/enterprise/dock… .
🔚