Docker
- Official documentation: www.docker.com/get-started
- Docker Practice (Chinese): docker_practice.gitee.io/zh-cn/
1. What is Docker
1.1 Official Definition
- Latest official website homepage
# 1. Official introduction
- "We have a complete container solution for you - no matter who you are and where you are on your containerization journey."
- Official definition: Docker is a container technology.
1.2 Origin of Docker
Docker began as an internal project at dotCloud, initiated by founder Solomon Hykes while the company was based in France. Built on dotCloud's years of cloud-service technology, it was open-sourced under the Apache 2.0 license in March 2013, with the main project code maintained on GitHub. The Docker project later joined the Linux Foundation and led to the founding of the Open Container Initiative (OCI). Docker's GitHub project has more than 57,000 stars and more than 10,000 forks, and the project became so popular that dotCloud renamed itself Docker at the end of 2013. Docker was originally developed and deployed on Ubuntu 12.04; Red Hat has supported Docker since RHEL 6.5, and Google uses it extensively in its PaaS products. Docker is implemented in Go, the language introduced by Google, and builds on the Linux kernel's cgroups and namespaces, together with union filesystems such as OverlayFS, to encapsulate and isolate processes — a virtualization technology at the operating-system level. Because an isolated process is independent of the host and of other isolated processes, it is also called a container.
2. Why Docker
-
During development, it works in the local test environment but fails in production
Take a Java web application as an example: it depends on many pieces of software environment, such as the JDK, Tomcat, and MySQL. If any one of these versions is inconsistent, the application may fail to run. Docker packages the program together with the environment the software needs, so the environment is consistent no matter which machine it runs on.
Advantage 1: Consistent operating environment, easier migration
-
My program crashed because someone else's program on the same server exhausted the memory
This is a common situation: unless your application is particularly important, the company will rarely dedicate a whole server to it, so it shares a server with other programs and inevitably suffers interference from them, causing problems in your own program. Docker solves this environment-isolation problem well: other programs cannot affect yours.
Advantage 2: Encapsulates and isolates processes so that containers do not interfere with each other, making more efficient use of system resources
-
The company needs to deploy dozens of extra servers for an event that may bring a traffic spike
Without Docker, deploying dozens of servers within a few days is very painful for operations; moreover, each server's environment is not necessarily identical, so all kinds of problems appear until the deployment team is numb. With Docker, the application only has to be packaged into an image once; you can then run as many containers from it as needed, greatly improving deployment efficiency.
Advantage 3: One image yields N consistent containers across environments
3. Difference between Docker and VIRTUAL machine
As for the difference between Docker and a virtual machine, a widely shared diagram shows it very intuitively and vividly; the comparison below summarizes it.
Comparing the two architectures, a virtual machine ships a guest operating system, so even a small application becomes large and cumbersome. Docker does not ship an operating system, so Docker applications are very light. The difference also shows when accessing host resources such as CPU and disk — take memory as an example: a virtual machine virtualizes memory through a hypervisor, so the call path is virtual memory -> virtualized physical memory -> real physical memory, whereas Docker calls host resources through the Docker Engine, so the path is simply virtual memory -> real physical memory.
| | Traditional VM | Docker container |
|---|---|---|
| Disk usage | From a few GB to dozens of GB | Tens of MB to hundreds of MB |
| CPU/memory usage | The guest OS consumes significant CPU and memory | Docker Engine overhead is extremely low |
| Startup time | Minutes (from boot to running the project) | Seconds (from starting the container to running the project) |
| Installation and management | Requires dedicated operations skills | Easy to install and manage |
| Application deployment | Every deployment takes time and effort | From the second deployment on, it is quick and simple |
| Coupling | Multiple applications installed together easily interfere with each other | One application per container gives isolation |
| System dependency | None | Requires a kernel of the same or similar version; Linux is currently recommended |
4. The installation of Docker
4.1 Install Docker (CentOS 7.x)
- Uninstall the original Docker
$ sudo yum remove docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-engine
- Install docker dependencies
$ sudo yum install -y yum-utils \
device-mapper-persistent-data \
lvm2
- Set the yum source of the Docker
$ sudo yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
- Install the latest version of Docker
$ sudo yum install docker-ce docker-ce-cli containerd.io
- Install a specific version of Docker
$ yum list docker-ce --showduplicates | sort -r
$ sudo yum install docker-ce-<VERSION_STRING> docker-ce-cli-<VERSION_STRING> containerd.io
$ sudo yum install docker-ce-18.09.5-3.el7 docker-ce-cli-18.09.5-3.el7 containerd.io
- Start the docker
$ sudo systemctl enable docker
$ sudo systemctl start docker
- Close the docker
$ sudo systemctl stop docker
- Test the Docker installation
$ sudo docker run hello-world
4.2 Bash Installation (Universal for all Platforms)
- For test or development environments, Docker officially provides a convenient installation script that simplifies the process. CentOS can be installed with this script, and the `--mirror` option selects a domestic mirror. After the command runs, the script prepares everything automatically and installs a stable version of Docker on the system.
$ curl -fsSL get.docker.com -o get-docker.sh
$ sudo sh get-docker.sh --mirror Aliyun
- Start the docker
$ sudo systemctl enable docker
$ sudo systemctl start docker
- Create a Docker user group
$ sudo groupadd docker
- Add the current user to the Docker group
$ sudo usermod -aG docker $USER
- Test whether docker is installed correctly
$ docker run hello-world
5. Core architecture of Docker
Image:
An image represents an application environment. It is a read-only file, e.g. a mysql image, a tomcat image, an nginx image.
Container:
Running an image creates a container. A container is a running instance of an image and is readable and writable.
Repository:
The place where images are stored, similar to Maven's repository; it is where images are downloaded from and uploaded to.
Dockerfile:
The build file from which Docker generates an image; it holds the configuration of a custom image.
Tar:
A file produced by packaging an image, which can later be restored into the image.
6. Configuring the Aliyun Image Acceleration Service
6.1 Docker Operation Process
6.2 Docker is configured with Aliyun image acceleration
Log in to your Aliyun account and look up your Docker image acceleration address.
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://lz2nib3q.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
Verify that Docker's image acceleration works
[root@localhost ~]# docker info
..........
127.0.0.0/8
Registry Mirrors:
'https://lz2nib3q.mirror.aliyuncs.com/'
Live Restore Enabled: false
Product License: Community Engine
7. Getting started with Docker
7.1 Docker’s first program
docker run hello-world
[root@localhost ~]# docker run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/get-started/
8. Common commands
6.1 Auxiliary Commands
# 1. Auxiliary commands after installation
docker version          # check docker version information
docker info             # view more detailed docker information
docker --help           # help command
6.2 Image Commands
# 1. View all local images
docker images                        # list local images
    -a                               # list all images, including intermediate layers
    -q                               # show only image ids
# 2. Search for images
docker search [OPTIONS] image-name   # query the image on Docker Hub
    -s N                             # list only images with at least N stars
# 3. Download an image from the repository
docker pull image-name[:TAG|@DIGEST] # download the image
# 4. Delete an image
docker rmi image-name                # remove an image
    -f                               # force delete
6.3 Container Commands
# 1. Run a container
docker run image-name                # create and start a container
    --name alias                     # give the container a name
    -d                               # start as a daemon container (run in the background)
    -p host-port:container-port      # map a host port to a container port
docker run --name myTomcat -p 8888:8080 -d tomcat
# 2. View running containers
docker ps                            # list running containers
    -a                               # include containers that have exited
    -q                               # quiet mode, show only container ids
# 3. Start | stop | restart a container
docker start container-name-or-id    # start the container
docker restart container-name-or-id  # restart the container
docker stop container-name-or-id     # stop the running container gracefully
docker kill container-name-or-id     # stop the running container immediately
# 4. Delete containers
docker rm -f container-name-or-id    # force-delete a container
docker rm -f $(docker ps -aq)        # delete all containers
# 5. View the processes in a container
docker top container-name-or-id      # view the processes inside the container
# 6. View container details
docker inspect container-id          # view details inside the container
# 7. View a container's logs
docker logs [OPTIONS] container-name-or-id
    -t                               # add timestamps
    -f                               # follow the latest log output
    --tail N                         # show the last N lines
# 8. Get a shell inside a container
docker exec -it container-name-or-id bash
    -i                               # run the container in interactive mode
    -t                               # allocate a pseudo-terminal
# 9. Copy files between a container and the host
docker cp file-or-dir container-id:container-path   # copy from the host into the container
docker cp container-id:resource-path host-dir-path  # copy from the container to the host
# 10. Share a directory with the host via a data volume (volume)
docker run -v host-absolute-path-or-alias:container-path image-name
Note:
1. A host path must be absolute, and the host directory overwrites the container directory's contents.
2. If an alias is given instead, Docker automatically creates a volume directory on the host when the container runs and copies the container directory's files into it.
# 11. Save an image to an archive
docker save image-name -o name.tar
# 12. Load an image from an archive
docker load -i name.tar
# 13. Commit a container as a new image
docker commit -m "description" -a "author" container-id-or-name packaged-image-name:tag
7. The Principle of Docker Images
7.1 What is an Image?
An image is a lightweight, executable, standalone package used to package a software runtime environment and software developed based on the runtime environment. It contains everything needed to run a piece of software, including code, runtime libraries, environment variables, and configuration files.
7.2 Why is an Image So Large?
An image packages an entire layered filesystem, built up layer by layer like a scroll
-
UnionFS:
The union file system (UnionFS) is a layered, lightweight, high-performance filesystem in which changes are stacked layer by layer, one commit at a time, and different directories can be mounted under a single virtual filesystem. UnionFS is the foundation of Docker images. Its defining feature is that several filesystems are mounted simultaneously but appear from the outside as a single filesystem: union mounting stacks the layers so that the final filesystem contains all the underlying files and directories.
7.3 Docker Mirroring Principle
Docker's image is actually made up of layer upon layer of file systems.
-
The boot file system (bootfs) mainly contains the bootloader and the kernel; the bootloader's job is to load the kernel. When Linux starts, the bootfs is loaded. The bottom layer of a Docker image is bootfs, the same layer found in Linux/Unix systems. Once booting completes, the whole kernel is in memory; ownership of that memory passes from bootfs to the kernel, and bootfs is unmounted.
-
The root file system (rootfs) sits on top of bootfs and contains the standard directories and files of a typical Linux system, such as /dev, /proc, /bin, and /etc. Different rootfs contents correspond to different operating system distributions, such as Ubuntu or CentOS.
-
A CentOS installation in a virtual machine takes one to several GB — so why is the Docker image only about 200 MB? For a lean OS, rootfs can be quite small: it only needs the most basic commands, tools, and libraries, because the underlying layer uses the host's kernel directly, and the image only has to supply the rootfs. It follows that different Linux distributions share the same bootfs and differ only in rootfs, so different distributions can share bootfs.
7.4 Why do Docker images use this layered structure?
One of the biggest benefits is resource sharing
- For example, if multiple images are built from the same base image, the host only needs to keep one copy of the base image on disk, and only one copy needs to be loaded in memory to serve all containers; moreover, every layer of an image can be shared. Docker images are read-only. When a container starts, a new writable layer is loaded on top of the image; this layer is usually called the container layer, and everything below it is called the image layer.
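To make layer sharing concrete, here is an illustrative Dockerfile (the image and file names are assumptions, not from the original): each instruction produces one read-only layer, and the base layers are stored once and reused by every image built on them.

```dockerfile
# Illustrative only: each instruction below creates one read-only image layer.
FROM centos:7                              # base layers, stored once and shared
RUN yum install -y nginx                   # new layer: installed packages
COPY index.html /usr/share/nginx/html/     # new layer: one copied file
CMD ["nginx", "-g", "daemon off;"]         # metadata only; the writable container
                                           # layer appears when a container starts
```

Building a second image `FROM centos:7` reuses the same base layers on disk and in memory.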
8.Docker installs common services
8.1 Install MySQL
# 1. Pull mysql image to local
docker pull mysql:tag
# 2. Run the mysql container; -e sets an environment variable required inside the image and must be given
docker run --name mysql -e MYSQL_ROOT_PASSWORD=root -d mysql:tag               # no port exposed; external connections fail
docker run --name mysql -e MYSQL_ROOT_PASSWORD=root -p 3306:3306 -d mysql:tag  # port exposed
# 3. Enter the mysql container
docker exec -it container-name-or-id bash
# 4. Check mysql logs from outside the container
docker logs container-name-or-id
# 5. Use custom configuration parameters
docker run --name mysql -v /root/mysql/conf.d:/etc/mysql/conf.d -e MYSQL_ROOT_PASSWORD=root -d mysql:tag
# 6. Mount the container data location with the host location to ensure data security
docker run --name mysql -v /root/mysql/data:/var/lib/mysql -v /root/mysql/conf.d:/etc/mysql/conf.d -e MYSQL_ROOT_PASSWORD=root -p 3306:3306 -d mysql:tag
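The conf.d directory mounted above can carry custom MySQL settings. A minimal sketch of a file such as /root/mysql/conf.d/my.cnf (the file name and values are illustrative, not from the original):

```ini
# Hypothetical overrides picked up from /etc/mysql/conf.d inside the container
[mysqld]
character-set-server = utf8mb4   # default server character set
max_connections      = 500       # raise the connection limit
```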
# 7. By other client access As in the window system | macos systems use the client tools to access
# 8. Back up mysql databases to a SQL file
docker exec mysql sh -c 'exec mysqldump --all-databases -uroot -p"$MYSQL_ROOT_PASSWORD"' > /root/all-databases.sql
# back up only the specified databases
docker exec mysql sh -c 'exec mysqldump --databases database-name -uroot -p"$MYSQL_ROOT_PASSWORD"' > /root/databases.sql
# back up only the table structure (no data) of specified tables
docker exec mysql sh -c 'exec mysqldump --no-data -uroot -p"$MYSQL_ROOT_PASSWORD" database-name table-name' > /root/schema.sql
# 9. Execute SQL file into mysql
docker exec -i mysql sh -c 'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD"' < /root/xxx.sql
8.2 Installing Redis Service
# 1. Search for Redis images in Docker Hub
docker search redis
# 2. Pull the Redis image locally
docker pull redis
# 3. Start the Redis service run container
docker run --name redis -d redis:tag                 # no port exposed
docker run --name redis -p 6379:6379 -d redis:tag    # port exposed for external access
# 4. View the startup log
docker logs -t -f container-id-or-name
# 5. Look inside the container
docker exec -it container-id-or-name bash    # then run redis-cli to start the Redis CLI
# 6. Load the external custom configuration to start the Redis container
By default the official redis image contains no redis.conf configuration file; download the matching version from the Redis website:
1. Download the source package:
   wget http://download.redis.io/releases/redis-5.0.8.tar.gz
2. Copy the configuration file from the package to a directory on the host, e.g. /root/redis/redis.conf
3. Modify the configuration as needed:
   bind 0.0.0.0      # allow remote connections
   appendonly yes    # enable AOF persistence
4. Start redis with the configuration loaded:
   docker run --name redis -v /root/redis:/usr/local/etc/redis -p 6379:6379 -d redis redis-server /usr/local/etc/redis/redis.conf
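For reference, here is a minimal sketch of the mounted /root/redis/redis.conf (the values are illustrative; all directives are standard redis.conf options):

```conf
bind 0.0.0.0          # listen on all interfaces so remote clients can connect
protected-mode no     # allow remote access without a password (use with care)
port 6379             # default port, matching the -p mapping above
appendonly yes        # enable AOF persistence
dir /data             # write persistence files to the mounted data volume
```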
# 7. Mount the data directory locally to ensure data security
docker run --name redis -v /root/redis/data:/data -v /root/redis/redis.conf:/usr/local/etc/redis/redis.conf -p 6379:6379 -d redis redis-server /usr/local/etc/redis/redis.conf   # redis-server is the command started inside the container
8.3 Install Nginx
# 1. Search for nginx in Docker Hub
docker search nginx
# 2. Pull nginx image to local
[root@localhost ~]# docker pull nginx
Using default tag: latest
latest: Pulling from library/nginx
afb6ec6fdc1c: Pull complete
b90c53a0b692: Pull complete
11fa52a0fdc0: Pull complete
Digest: sha256:30dfa439718a17baafefadf16c5e7c9d0a1cde97b4fd84f63b69e13513be7097
Status: Downloaded newer image for nginx:latest
docker.io/library/nginx:latest
# 3. Start nginx container
docker run -p 80:80 --name nginx01 -d nginx
# 4. Enter the container
docker exec -it nginx01 /bin/bash
whereis nginx        # the configuration file is at /etc/nginx/nginx.conf
# 5. Copy the configuration file to the host
docker cp nginx01:/etc/nginx/nginx.conf host-directory    # nginx01 may be a container id or name
# 6. Mount the nginx configuration and the html directory from the host
docker run --name nginx02 -v /root/nginx/nginx.conf:/etc/nginx/nginx.conf -v /root/nginx/html:/usr/share/nginx/html -p 80:80 -d nginx
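A minimal sketch of the mounted /root/nginx/nginx.conf serving the mounted html directory (the contents are an assumption to illustrate the bind mounts above):

```nginx
events {}
http {
    include /etc/nginx/mime.types;
    server {
        listen 80;
        root   /usr/share/nginx/html;   # the directory mounted from /root/nginx/html
        index  index.html;
    }
}
```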
8.4 Install Tomcat
# 1. Search for Tomcat in Docker Hub
docker search tomcat
# 2. Download tomcat image
docker pull tomcat
# 3. Run tomcat image
docker run -p 8080:8080 -d --name mytomcat tomcat
# 4. Enter the Tomcat container
docker exec -it mytomcat /bin/bash
# 5. Mount the webapps directory from the host
docker run -p 8080:8080 -v /root/webapps:/usr/local/tomcat/webapps -d --name mytomcat tomcat
8.5 Installing the MongoDB Database
# 1. Run mongodb
docker run -d -p 27017:27017 --name mymongo mongo
docker logs -f mymongo        # view the startup log
# 2. Enter the mongodb container
docker exec -it mymongo /bin/bash     # then run the mongo command directly
# 3. Common containers with permissions
docker run --name mymongo -p 27017:27017 -d mongo --auth
# 4. Enter the container and configure the username and password
mongo
use admin
db.createUser({user:"root", pwd:"root", roles:[{role:'root', db:'admin'}]})   # create a user; after creation, subsequent operations require authentication
exit
# 5. Map the mongoDB data directory to the host
docker run -d -p 27017:27017 -v /root/mongo/data:/data/db --name mymongo mongo
8.6 Install ElasticSearch
Note:
ElasticSearch requires raising the kernel's vm.max_map_count limit (see the pre-configuration step below)
0. Pull an image and run ElasticSearch
# 1. Pull the image from Docker Hub
docker pull elasticsearch:6.4.2
# 2. Run elasticsearch
docker run -d -p 9200:9200 -p 9300:9300 elasticsearch:6.4.2
- The following error occurred during startup
1. Perform pre-configuration
In the centos VM, modify the sysctl.conf configuration
# 1. Edit the sysctl configuration
vim /etc/sysctl.conf
# 2. Add the following line
vm.max_map_count=262144
# 3. Apply the change
sysctl -p
# Note: this prevents the following error when the container starts:
#   bootstrap checks failed
#   max virtual memory areas vm.max_map_count [65530] likely too low, increase to at least [262144]
2. Start EleasticSearch
# 0. Copy the data directory in the container to the host
docker cp container-id:/usr/share/elasticsearch/data /root/es
# 1. Run elasticsearch with mounted plugin and data directories and a limited JVM heap
docker run -d --name es -p 9200:9200 -p 9300:9300 -e ES_JAVA_OPTS="-Xms128m -Xmx128m" -v /root/es/plugins:/usr/share/elasticsearch/plugins -v /root/es/data:/usr/share/elasticsearch/data elasticsearch:6.4.2
3. Install IK word dividers
# 1. Download the corresponding version of the IK word divider
wget https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v6.4.2/elasticsearch-analysis-ik-6.4.2.zip
# 2. Unzip into the plugins folder
yum install -y unzip
unzip -d ik elasticsearch-analysis-ik-6.4.2.zip
# 3. Add custom extension words and stop words
cd plugins/ik/config
vim IKAnalyzer.cfg.xml
<properties>
	<comment>IK Analyzer extension configuration</comment>
	<!-- users can configure their own extension dictionary -->
	<entry key="ext_dict">ext_dict.dic</entry>
	<!-- users can configure their own extension stop-word dictionary -->
	<entry key="ext_stopwords">ext_stopwords.dic</entry>
</properties>
# 4. In the ik config directory create the ext_dict.dic file (it must be UTF-8 encoded to take effect)
vim ext_dict.dic          # add extension words
# 5. In the ik config directory create the ext_stopwords.dic file
vim ext_stopwords.dic     # add stop words
# 6. Restart the container to take effect
docker restart es
docker commit -a="xiaochen" -m="es with IKAnalyzer" container-id xiaochen/elasticsearch:6.4.2
4. Install Kibana
# 1. Download the Kibana image locally
docker pull kibana:6.4.2
# 2. Start the Kibana container
docker run -d --name kibana -e ELASTICSEARCH_URL=http://10.15.0.3:9200 -p 5601:5601 kibana:6.4.2
10. Fixing the Following Docker Error
[root@localhost ~]# docker search mysql      # docker pull fails in the same way
Error response from daemon: Get https://index.docker.io/v1/search?q=mysql&n=25: x509: certificate has expired or is not yet valid
- Note: The reason for this error is that the system time is inconsistent with the Docker Hub time, so it is necessary to synchronize the system time with the network time
# 1. Install time synchronization
sudo yum -y install ntp ntpdate
# 2. Synchronize the system time with a network time server
sudo ntpdate cn.pool.ntp.org
# 3. Run the docker command again to verify
9.Dockerfile
9.1 What is a Dockerfile
A Dockerfile can be thought of as the description file of a Docker image: a script composed of a series of commands and parameters, whose main purpose is to serve as the build file for a Docker image.
- As the architecture diagram shows, images can be built directly from a Dockerfile
9.2 Dockerfile Parsing Procedure
9.3 Dockerfile Reserved Words
The official explanation: docs.docker.com/engine/refe…
| Reserved word | Role |
|---|---|
FROM | Which mirror is the current mirror based on The first instruction must be FROM |
MAINTAINER | The name and email address of the mirror maintainer |
RUN | Instructions to run when building the image |
EXPOSE | The port number exposed by the current container |
WORKDIR | Specifies the working directory that the terminal logs in to by default after creating the container, a landing point |
ENV | Used to set environment variables during the image build process |
ADD | Copy files from the host directory into the image and the ADD command automatically processes the URL and decompresses the tar package |
COPY | Similar to ADD, copy files and directories to the image Copies files/directories from the < original path > directory in the build context directory to the < destination path > location in the image in the new layer |
VOLUME | Container data volumes for data preservation and persistence |
CMD | Specifies the command to run when a container is started There can be multiple CMD directives in a Dockerfile, but only the last one takes effect and CMD is replaced by the argument after the Docker run |
ENTRYPOINT | Specifies the command to run when a container is started The purpose of ENTRYPOINT, like CMD, is to specify the container launcher and its parameters |
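Most of the reserved words above can be seen working together in a single file. The following sketch is illustrative only (the image name, paths, and app.sh are assumptions, not from the original):

```dockerfile
FROM centos:7                        # base image; must be the first instruction
MAINTAINER xiaochen                  # maintainer (deprecated in favor of LABEL)
ENV APP_HOME /app                    # environment variable available from here on
WORKDIR $APP_HOME                    # working directory; created if missing
COPY app.sh $APP_HOME/               # copy a file from the build context
RUN chmod +x app.sh                  # runs at build time, adds a layer
VOLUME ["/app/data"]                 # directory that can be mounted at run time
EXPOSE 8080                          # port the container exposes
ENTRYPOINT ["./app.sh"]              # command run when the container starts
```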
9.3.1 FROM command
-
Builds the new image on top of the specified base image, which is pulled automatically from Docker Hub during the build. FROM must appear as the first instruction in the Dockerfile.
-
Grammar:
FROM <image>             # uses the latest version
FROM <image>[:<tag>]     # uses the specified tag
FROM <image>[@<digest>]  # uses the digest
9.3.2 MAINTAINER command
-
Image maintainer’s name and email address [deprecated]
-
Grammar:
MAINTAINER <name>
9.3.3 RUN command
-
The RUN directive will execute any commands in a new layer above the current image and submit the results. The generated commit image will be used for the next step in the Dockerfile
-
Grammar:
RUN <command>                          # shell form; run in a shell: /bin/sh -c on Linux, cmd /S /C on Windows
RUN echo hello
RUN ["executable", "param1", "param2"] # exec form
RUN ["/bin/bash", "-c", "echo hello"]
9.3.4 EXPOSE command
-
Used to specify the ports that the built image is exposed to when running as a container
-
Grammar:
EXPOSE 80/tcp   # tcp is the default when no protocol is specified
EXPOSE 80/udp
9.3.5 CMD command
-
Specifies the command to execute when the container starts.
-
Note: there can be only one effective CMD instruction in a Dockerfile; if several are listed, only the last takes effect, and CMD is overridden by the arguments passed to docker run.
-
Grammar:
CMD ["executable", "param1", "param2"]  # exec form; this is the preferred form
CMD ["param1", "param2"]                # as default parameters to ENTRYPOINT
CMD command param1 param2               # shell form
9.3.6 WORKDIR command
-
Used to set working directories for any RUN, CMD, ENTRYPOINT, COPY, and ADD directives in Dockerfile. If WORKDIR does not exist, it will be created even if it is not used in any subsequent Dockerfile directive.
-
Grammar:
WORKDIR /path/to/workdir
WORKDIR /a
WORKDIR b
WORKDIR c     # the final working directory is /a/b/c
# Note: WORKDIR can be used multiple times in a Dockerfile; a relative path is resolved against the previous WORKDIR
9.3.7 ENV command
-
Used to set environment variables for the build image. This value will appear in the context of all subsequent instructions during the build phase.
-
Grammar:
ENV <key> <value>
ENV <key>=<value> ...
9.3.8 ADD command
-
Use to copy a new file, directory, or remote file URL from the context context and add them to the image file system at the specified path.
-
Grammar:
ADD hom* /mydir/           # wildcards add multiple files
ADD hom?.txt /mydir/       # ? matches any single character
ADD test.txt relativeDir/  # relative destination path
ADD test.txt /absoluteDir/ # absolute destination path
ADD <url>                  # add a remote file from a URL
9.3.9 COPY command
-
To copy the specified file from the context directory to the specified directory of the image
-
Grammar:
COPY <src> <dest>
COPY ["<src>", "<dest>"]
9.3.10 VOLUME command
-
Used to define the directory in which the container can be mounted to the host at runtime
-
Grammar:
VOLUME ["/data"]
9.3.11 ENTRYPOINT command
-
Like CMD, used to specify the command to execute when the container starts
-
Grammar:
ENTRYPOINT ["executable", "param1", "param2"]
ENTRYPOINT command param1 param2
The ENTRYPOINT instruction is often used to set the main command a container runs, which is generally fixed for a given container; the CMD instruction then supplies that command's default parameters, which can vary from run to run.
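A short sketch of that division of labor (the image is illustrative): ENTRYPOINT fixes the program, while CMD supplies default arguments that docker run can override.

```dockerfile
FROM ubuntu:20.04
ENTRYPOINT ["ls", "-l"]   # the container always runs ls -l
CMD ["/etc"]              # default argument, replaced by docker run arguments
# docker run <image>           executes: ls -l /etc
# docker run <image> /usr/bin  executes: ls -l /usr/bin
```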
9.4 Dockerfile Construction Springboot project deployment
1. Prepare an executable Spring Boot project (jar)
2. Upload the runnable jar to the Linux VM
3. Write Dockerfile
FROM openjdk:8
WORKDIR /ems
ADD ems.jar /ems
EXPOSE 8989
ENTRYPOINT ["java", "-jar"]
CMD ["ems.jar"]
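As an optional refinement (not part of the original deployment; the Maven image tag and project layout are assumptions), a multi-stage build can compile the jar and keep the runtime image small:

```dockerfile
# Stage 1: build the jar (assumes a standard Maven project layout)
FROM maven:3.8-openjdk-8 AS build
WORKDIR /src
COPY . .
RUN mvn -q package -DskipTests

# Stage 2: copy only the jar into the runtime image
FROM openjdk:8-jre
WORKDIR /ems
COPY --from=build /src/target/ems.jar /ems/ems.jar
EXPOSE 8989
ENTRYPOINT ["java", "-jar", "ems.jar"]
```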
4. Build an image
[root@localhost ems]# docker build -t ems .
5. Run the image
[root@localhost ems]# docker run -p 8989:8989 ems
6. Visit projects
http://10.15.0.8:8989/ems/login.html
10. Advanced network configuration
10.1 Overview
When Docker starts, it automatically creates a docker0 virtual bridge on the host. This is actually a Linux bridge and can be understood as a software switch: it forwards packets between the ports attached to it.
At the same time, Docker assigns the docker0 interface an address from a private network range (defined in RFC 1918) that is not in use locally — typically 172.17.42.1 with netmask 255.255.0.0 — and containers started later get their network interfaces assigned addresses on the same network (172.17.0.0/16).
When a Docker container is created, a veth pair is created as well (a packet sent into one end of the pair is received at the other end). One end is placed inside the container and named eth0; the other end stays on the host, named with a veth prefix (for example vethAQI2QT), and is attached to the docker0 bridge. In this way the host can communicate with containers, and containers can communicate with each other: Docker creates a virtual shared network between the host and all containers.
10.2 Viewing Network Information
# docker network ls
10.3 Creating a Bridge
# docker network create -d bridge name
10.4 Deleting a Bridge
# docker network rm bridge-name
10.5 Network Communication Between Containers
# 1. Query current network configuration
- docker network ls
NETWORK ID NAME DRIVER SCOPE
8e424e5936b7 bridge bridge local
17d974db02da docker_gwbridge bridge local
d6c326e433f7 host host local
# 2. Create a bridge network
- docker network create -d bridge info
[root@centos ~]# docker network create -d bridge info
6e4aaebff79b1df43a064e0e8fdab08f52d64ce34db78dd5184ce7aaaf550a2f
[root@centos ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
8e424e5936b7 bridge bridge local
17d974db02da docker_gwbridge bridge local
d6c326e433f7 host host local
6e4aaebff79b info bridge local
# 3. Start containers on the specified bridge
- docker run -d -p 8890:80 --name nginx001 --network info nginx
- docker run -d -p 8891:80 --name nginx002 --network info nginx
Note: once a custom bridge is specified, the value of --name also serves as a host name. When multiple containers are attached to the same bridge, any of them can reach the others by container name.
[root@centos ~]# docker run -d -p 8890:80 --name nginx001 --network info nginx
c315bcc94e9ddaa36eb6c6f16ca51592b1ac8bf1ecfe9d8f01d892f3f10825fe
[root@centos ~]# docker run -d -p 8891:80 --name nginx002 --network info nginx
f8682db35dd7fb4395f90edb38df7cad71bbfaba71b6a4c6e2a3a525cb73c2a5
[root@centos ~]# docker ps
CONTAINER ID   IMAGE          COMMAND                    CREATED         STATUS         PORTS                  NAMES
f8682db35dd7   nginx          "/docker-entrypoint...."   3 seconds ago   Up 2 seconds   0.0.0.0:8891->80/tcp   nginx002
c315bcc94e9d   nginx          "/docker-entrypoint...."   7 minutes ago   Up 7 minutes   0.0.0.0:8890->80/tcp   nginx001
b63169d43792   mysql:5.7.19   "docker-entrypoint.s..."   7 minutes ago   Up 7 minutes   3306/tcp               mysql_mysql.1.s75qe5kkpwwttyf0wrjvd2cda
[root@centos ~]# docker exec -it f8682db35dd7 /bin/bash
root@f8682db35dd7:/# curl http://nginx001
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
.....
11. Configure advanced data volumes
11.1 Overview
A data volume is a special directory that can be used by one or more containers. It bypasses UFS and provides a number of useful features:

- Data volumes can be shared and reused between containers.
- Changes to a data volume take effect immediately.
- Updates to a data volume do not affect the image.
- Data volumes persist by default, even after the container is deleted.

Note: Using a data volume is similar to mounting a directory or file under Linux. Files in the directory that the image designates as the mount point are copied into the data volume (only when the data volume is empty).
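The copy-only-when-empty rule in the note can be mimicked in plain shell — a simulation of the behavior for illustration, not Docker's implementation:

```shell
#!/bin/sh
# Simulate Docker's first-mount rule: the image's files at the mount point
# are copied into the volume only when the volume is empty.
seed_volume() {
  image_dir=$1
  volume_dir=$2
  if [ -z "$(ls -A "$volume_dir")" ]; then
    cp -R "$image_dir"/. "$volume_dir"/
  fi
}

# Demo with throwaway directories standing in for the image and the volume.
img=$(mktemp -d); vol=$(mktemp -d)
echo "index.html from image" > "$img/index.html"

seed_volume "$img" "$vol"   # empty volume -> files are copied in
cat "$vol/index.html"       # -> index.html from image

echo "edited in volume" > "$vol/index.html"
seed_volume "$img" "$vol"   # non-empty volume -> left untouched
cat "$vol/index.html"       # -> edited in volume

rm -rf "$img" "$vol"
```

This is why mounting a volume over `/usr/share/nginx/html` shows nginx's default page the first time, but your own edits on every mount afterwards.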
11.2 Creating a Data Volume
[root@centos ~]# docker volume create my-vol
my-vol
11.3 Viewing Data Volumes
[root@centos ~]# docker volume inspect my-vol
[
{
"CreatedAt": "2020-11-25T11:43:56+08:00",
"Driver": "local",
"Labels": {},
"Mountpoint": "/var/lib/docker/volumes/my-vol/_data",
"Name": "my-vol",
"Options": {},
"Scope": "local"
}
]
11.4 Mounting A Data Volume
[root@centos ~]# docker run -d -P --name web -v my-vol:/usr/share/nginx/html nginx
[root@centos ~]# docker inspect web
"Mounts": [
{
"Type": "volume",
"Name": "my-vol",
"Source": "/var/lib/docker/volumes/my-vol/_data",
"Destination": "/usr/share/nginx/html",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
11.5 Deleting a Data Volume
docker volume rm my-vol
12.Docker Compose
12.1 Introduction
The Compose project is Docker’s official open source project, which is responsible for the rapid choreography of Docker container clusters. Functionally, it is very similar to Heat in OpenStack.
The code is currently available at github.com/docker/comp… On open source.
Compose is positioned as “Defining and running multi-container Docker applications”. Its predecessor is Fig, an open source project.
From part 1, we learned that using a Dockerfile template file makes it easy for users to define a separate application container. However, in daily work, it is common to encounter situations where multiple containers need to work together to complete a task. For example, to implement a Web project, in addition to the Web service container itself, there is often a database service container on the back end, and even a load balancing container.
Compose fits that need. It allows users to define a group of associated application containers as a project through a single docker-compose.yml template file (in YAML format).
Compose has two important concepts:

- Service (`service`): an application container; it can actually contain several instances of containers running the same image.
- Project (`project`): a complete business unit consisting of a set of associated application containers, defined in the `docker-compose.yml` file.
Compose’s default management object is a project, which provides easy lifecycle management through subcommands for a set of containers in a project.
The Compose project is written in Python and its implementation calls the API provided by the Docker service to manage the container. Therefore, you can leverage Compose for orchestration management on any platform you operate on that supports the Docker API.
12.2 Installation and Uninstallation
1.linux
- Installing on Linux is as simple as downloading the compiled binaries directly from the official GitHub Release. For example, download the corresponding binary package directly on a Linux 64-bit system.
$ sudo curl -L https://github.com/docker/compose/releases/download/1.25.5/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
$ sudo chmod +x /usr/local/bin/docker-compose
2. macOS, Windows
- Compose can be installed via Python’s package-management tool pip, downloaded directly as a compiled binary, or even run from a Docker container.
Docker Desktop for Mac/Windows ships with the docker-compose binary, so docker-compose is available as soon as Docker Desktop is installed.
3. Bash command completion
$ curl -L https://raw.githubusercontent.com/docker/compose/1.25.5/contrib/completion/bash/docker-compose > /etc/bash_completion.d/docker-compose
4. Remove
- If the installation is in binary package mode, delete the binary file.
$ sudo rm /usr/local/bin/docker-compose
5. Verify the installation
$ docker-compose --version
docker-compose version 1.25.5, build 4667896b
12.3 Docker Compose Usage
# 1. Related concepts
Let’s start with a few terms.
- Service (`service`): an application container; it can actually run multiple instances of the same image.
- Project (`project`): a complete business unit consisting of a set of associated application containers. A project can contain multiple services (containers); Compose is project-oriented in its management.
# 2. Scenario
The most common project is a Web site, which should contain a Web application and cache.
- Springboot application
- The mysql service
- Redis service
- Elasticsearch service
- .
# 3. The docker-compose template
-Refer to https://docker_practice.gitee.io/zh-cn/compose/compose_file.html
version: "3.0"
services:
mysqldb:
    image: mysql:5.7.19
container_name: mysql
ports:
- "3306:3306"
volumes:
- /root/mysql/conf:/etc/mysql/conf.d
- /root/mysql/logs:/logs
- /root/mysql/data:/var/lib/mysql
environment:
MYSQL_ROOT_PASSWORD: root
networks:
- ems
depends_on:
- redis
redis:
    image: redis:4.0.14
container_name: redis
ports:
- "6379:6379"
networks:
- ems
volumes:
- /root/redis/data:/data
command: redis-server
networks:
ems:
# 4. Run a set of containers via Docker-compose
-Refer to https://docker_practice.gitee.io/zh-cn/compose/commands.html
[root@centos ~]# docker-compose up      # start the services in the foreground
[root@centos ~]# docker-compose up -d   # start the services in the background
12.4 Docker-compose template file
The template file is the core of Compose and involves many directive keywords. But don’t worry: most of these directives have meanings similar to the corresponding docker run parameters.
The default template file name is docker-compose.yml, in YAML format.
version: "3"
services:
webapp:
image: examples/web
ports:
- "80:80"
volumes:
- "/data"
Note that each service must specify an image, either directly via the image directive or by building one with the build directive (which requires a Dockerfile).
If you use the build directive, the options set in the Dockerfile (for example: CMD, EXPOSE, VOLUME, ENV, etc.) are automatically inherited, without having to repeat those settings in docker-compose.yml.
The following describes the usage of each command.
build
Specify the path of the folder containing the Dockerfile (either absolute, or relative to the docker-compose.yml file). Compose uses it to build the image automatically and then run that image.
version: '3'
services:
webapp:
build: ./dir
You can also specify the path of the Dockerfile folder using the context directive.
Use the dockerfile directive to specify the dockerfile file name.
Use the ARG directive to specify variables when building the image.
version: '3'
services:
webapp:
build:
context: ./dir
dockerfile: Dockerfile-alternate
args:
buildno: 1
command
Overrides the commands executed by default when the container is started.
command: echo "hello world"
container_name
Specify the container name. By default the name follows the format <project>_<service>_<index>.
container_name: docker-web-container
Note: When you specify a container name, the service will not be able to scale because Docker does not allow multiple containers to have the same name.
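The default naming scheme, and why a fixed container_name blocks scaling, can be sketched as follows (the format string is the classic Compose v1 convention, reproduced here for illustration):

```shell
#!/bin/sh
# Classic Compose naming: <project>_<service>_<index>. The trailing index is
# what lets `docker-compose up --scale` create several distinct containers;
# a fixed container_name removes the index, so the names would collide.
default_container_name() {
  printf '%s_%s_%s\n' "$1" "$2" "$3"
}

default_container_name myapp web 1   # -> myapp_web_1
default_container_name myapp web 2   # -> myapp_web_2
```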
depends_on
Declare container dependencies and startup order. In the following example, db and redis are started before web.
version: '3'
services:
web:
build: .
depends_on:
- db
- redis
redis:
image: redis
db:
image: postgres
Note: the web service does not wait for db or redis to be “fully ready” before starting.
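Because of this, applications often guard their own startup with a retry loop. A minimal sketch (the port-check command in the example comment is an assumption; any health probe works):

```shell
#!/bin/sh
# Retry a health-check command until it succeeds or the retries run out.
wait_for() {
  retries=$1
  shift
  i=0
  until "$@"; do
    i=$((i + 1))
    if [ "$i" -ge "$retries" ]; then
      return 1   # gave up: the dependency never became ready
    fi
    sleep 1
  done
}

# Example: block until the database answers before launching the web app:
#   wait_for 30 nc -z db 5432 && exec ./start-web.sh
wait_for 3 true && echo "dependency is ready"
```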
env_file
Gets environment variables from a file, either as a separate file path or as a list.
If you specify the Compose template file with docker-compose -f FILE, the paths in env_file are resolved relative to the template file’s path.
If a variable name conflicts with one set by the environment directive, the value from environment prevails.
env_file: .env
env_file:
- ./common.env
- ./apps/web.env
- /opt/secrets.env
Each line in an environment variable file must use the VAR=VALUE format; comment lines starting with # are supported.
# common.env: Set development environment
PROG_ENV=development
environment
Set environment variables. You can use either an array or a dictionary.
Variables given only a name (with no value) automatically pick up the value of the variable of the same name on the host running Compose, which can be used to avoid writing sensitive data into the file.
environment:
RACK_ENV: development
SESSION_SECRET:
environment:
- RACK_ENV=development
- SESSION_SECRET
If a variable name or value is a word that YAML treats as a boolean (true|false, yes|no, etc.), it is best to quote it so that YAML does not automatically parse it with boolean semantics. These special words include:
y|Y|yes|Yes|YES|n|N|no|No|NO|true|True|TRUE|false|False|FALSE|on|On|ON|off|Off|OFF
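A quick way to check whether a value needs quoting — a sketch built from the word list above:

```shell
#!/bin/sh
# Return success (0) if the value is one of YAML 1.1's boolean-like words
# and therefore should be quoted in docker-compose.yml.
needs_quotes() {
  case $1 in
    y|Y|yes|Yes|YES|n|N|no|No|NO|true|True|TRUE|false|False|FALSE|on|On|ON|off|Off|OFF)
      return 0 ;;
    *)
      return 1 ;;
  esac
}

needs_quotes yes && echo 'quote it: "yes"'        # -> quote it: "yes"
needs_quotes development || echo "safe unquoted"  # -> safe unquoted
```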
healthcheck
Run commands to check whether the container is healthy.
healthcheck:
  test: ["CMD", "curl", "-f", "http://localhost"]
interval: 1m30s
timeout: 10s
retries: 3
image
Specifies the image name or image ID. Compose will attempt to pull the image if it does not already exist locally.
image: ubuntu
image: orchardup/postgresql
image: a4bc65fd
networks
Configure the network connected to the container.
version: "3"
services:
some-service:
networks:
- some-network
- other-network
networks:
some-network:
other-network:
ports
Expose port information.
Use the HOST:CONTAINER format, or specify only the container port (the host port will then be chosen randomly).
ports:
- "3000"
- "8000:8000"
- "49100:22"
- "127.0.0.1:8001:8001"
Note: when mapping ports in the HOST:CONTAINER format, if the container port is less than 60 and the mapping is not quoted, you may get wrong results, because YAML parses numbers in the xx:yy format as base-60 values. To avoid this problem, always quote port mappings as strings.
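The base-60 surprise is easy to reproduce arithmetically. A sketch of how a YAML 1.1 parser reads an unquoted xx:yy token when both parts are below 60:

```shell
#!/bin/sh
# YAML 1.1 sexagesimal: an unquoted xx:yy (both parts < 60) parses
# as the single integer xx*60 + yy instead of a port mapping.
yaml11_sexagesimal() {
  hi=${1%%:*}
  lo=${1##*:}
  echo $((hi * 60 + lo))
}

yaml11_sexagesimal 49:22   # -> 2962  (not the mapping "49:22")
yaml11_sexagesimal 3:6     # -> 186
```

Quoting the mapping ("49:22") keeps it a string and avoids the problem entirely.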
sysctls
Configure container kernel parameters.
sysctls:
net.core.somaxconn: 1024
net.ipv4.tcp_syncookies: 0
sysctls:
- net.core.somaxconn=1024
- net.ipv4.tcp_syncookies=0
ulimits
Specifies the ulimits value for the container.
For example, set the maximum number of processes to 65535 and the number of file handles to 20000 (soft limit, which the application can change at any time but cannot exceed the hard limit) and 40000 (hard limit, which only the root user can raise).
ulimits:
nproc: 65535
nofile:
soft: 20000
hard: 40000
volumes
Set the paths of mounted data volumes. You can use a host path (HOST:CONTAINER) or a volume name (VOLUME:CONTAINER), and optionally append an access mode (HOST:CONTAINER:ro).
Paths in this directive support relative paths.
volumes:
- /var/lib/mysql
- cache/:/tmp/cache
- ~/configs:/etc/configs/:ro
If the path is the data volume name, you must configure the data volume in the file.
version: "3"
services:
my_src:
    image: mysql:8.0
volumes:
- mysql_data:/var/lib/mysql
volumes:
mysql_data:
12.5 Docker-compose Common command
1. Command object and format
For Compose, the object of most commands can be either the project itself, or specified as a service or container in the project. If not specified, the command object will be a project, which means that all services in the project will be affected by the command.
Run docker-compose [COMMAND] --help or docker-compose help [COMMAND] to view the usage of a particular command.
The basic usage format of the docker-compose command is
docker-compose [-f=<arg>...] [options] [COMMAND] [ARGS...]
2. Command options
- `-f, --file FILE`: specify the Compose template file to use; defaults to `docker-compose.yml`. Can be specified multiple times.
- `-p, --project-name NAME`: specify a project name. By default, the name of the current directory is used as the project name.
- `--x-networking`: use Docker’s pluggable network backend feature.
- `--x-network-driver DRIVER`: specify the network backend driver; the default is `bridge`.
- `--verbose`: output more debugging information.
- `-v, --version`: print the version and exit.
3. Command usage instructions
up
Format: docker-compose up [options] [SERVICE...]
-
This is a powerful command that attempts to automate a series of operations including building an image, (re) creating a service, starting a service, and associating a service-related container.
-
Linked services will be automatically started unless they are already running.
-
Most of the time, you can start a project directly with this command.
-
By default, docker-compose up starts all containers in the foreground, and the console will print the output of all containers at the same time, making it easy to debug.
-
When ctrl-C stops the command, all containers will stop.
-
If docker-compose up -d is used, all containers will be up and running in the background. This option is recommended in the production environment.
-
By default, if a service’s container already exists, docker-compose up will try to stop it and recreate it (preserving volumes mounted with volumes-from) to ensure that the newly started service matches the latest content of the docker-compose.yml file.
down
- This command stops the containers started by the up command and removes the networks.
exec
- Enter the specified container.
ps
Format: docker-compose ps [options] [SERVICE...]
Lists all current containers in the project.
Options:
-q
Only the container ID information is printed.
restart
Format: docker-compose restart [options] [SERVICE...]
Restart services in the project.
Options:
-t, --timeout TIMEOUT
Specifies a timeout (10 seconds by default) for stopping the container before restart.
rm
Format: docker-compose rm [options] [SERVICE...]
Delete all (stopped) service containers. It is recommended to stop the container by executing the docker-compose stop command.
Options:
- `-f, --force`: force deletion, including containers that are not stopped. In general, avoid this option.
- `-v`: also delete the data volumes attached to the containers.
start
Format: docker-compose start [SERVICE...]
Start an existing service container.
stop
Format: docker-compose stop [options] [SERVICE...]
Stops a container that is already running, but does not delete it. These containers can be started again with docker-compose start.
Options:
-t, --timeout TIMEOUT
Timeout when the container is stopped (default: 10 seconds).
top
View the processes running in each service container.
unpause
Format: docker-compose unpause [SERVICE...]
Resume a service in the suspended state.
13. Docker visualization tool
13.1 Installing Portainer
Official installation instructions: www.portainer.io/installatio…
[root@ubuntu1804 ~]# docker pull portainer/portainer
[root@ubuntu1804 ~]# docker volume create portainer_data
portainer_data
[root@ubuntu1804 ~]# docker run -d -p 8000:8000 -p 9000:9000 --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer
20db26b67b791648c2ef6aee444a5226a9c897ebcf0160050e722dbf4a4906e3
[root@ubuntu1804 ~]# docker ps
CONTAINER ID   IMAGE                 COMMAND        CREATED         STATUS         PORTS                                            NAMES
20db26b67b79   portainer/portainer   "/portainer"   5 seconds ago   Up 4 seconds   0.0.0.0:8000->8000/tcp, 0.0.0.0:9000->9000/tcp   portainer
13.2 Log in and use Portainer
Use your browser to visit: http://localhost:9000