What is Docker?
Docker is an open-source container engine that makes it easy to create lightweight, portable, self-contained containers for any application. Developers and system administrators can build and test containers on a laptop, then deploy them in batches to production environments, including VMs, bare metal, OpenStack clusters, clouds, and data centers. Containers are fully sandboxed and have no interfaces to each other.
Why use Docker?
Start with the current pain points in the software industry:
- 1. Releasing and deploying software updates is inefficient; the process is tedious and requires manual intervention
- 2. Environment consistency is hard to guarantee
- 3. Migration between different environments is too costly
With Docker, we can solve these problems to a large extent.
First, Docker is extremely simple to use. From a development perspective it is a three-step process: build, ship, and run. The key step is build, which packages the image file. From a test and operations perspective there are only two steps: copy and run. With the image, you can run the application wherever you want, independent of the platform. At the same time, Docker's container technology isolates an independent running space, so an application does not compete with other applications for system resources, and there is no need to worry about interference between applications.
Second, because all of a service's dependencies are taken care of when the image is built, you can ignore the original application's dependencies and language when you use it. Testers and operators can focus on the business itself.
Finally, Docker gives developers a way to manage the development environment, keeps that environment in sync with testers, and provides operations staff with a portable, standardized deployment process.
What can Docker do?
- Easy to build and easy to distribute
- Isolate application dependencies
- Rapid deployment and testing
Where does Docker apply?
- Local Dependency
Do you need to try Magento quickly on your local system, or spin up MySQL for a project? Or do you just want to try out an open source project? Use Docker; it will save you a lot of time. Docker improves developer productivity by letting us build a development environment quickly.
Development machines usually have relatively little memory. When we used virtual machines, we often had to add memory to the development machine; with Docker, dozens of services can easily run side by side.
- Build Environment
Suppose you want to build a project from source, but the right environment is not ready. Using Docker is a solution worth considering: installing the required software one piece at a time the traditional way takes time, so why not use container technology to save the effort? Docker lets you put the runtime environment and its configuration into code and deploy it. The same Docker configuration can be used across different environments, reducing the coupling between hardware requirements and the application environment. An example worth checking out: Docker Golang Builder.
- Microservices
Are you using microservices? Microservices architecture breaks down a monolithic application into loosely coupled individual services.
Consider Docker. You can package each service as a Docker image and use docker-compose to simulate a production environment (check out Docker Networks). The practice may be time-consuming at first, but in the long run it yields huge productivity gains.
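As an illustration, a minimal docker-compose file for two such services might look like the sketch below; the service names and the application image are assumptions, not from a real project:

```yaml
# Hypothetical two-service setup: a web API backed by MySQL.
version: "3"
services:
  web:
    image: example/web-api        # assumed application image
    ports:
      - "8000:8000"
    depends_on:
      - db                        # start the database first
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
```

Running docker-compose up -d then brings up both services on a shared network, approximating the production topology on a single machine.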
- Automated Testing
Consider writing automated integration test cases that do not take long to get up and running and that are easy to manage. This does not mean running the test cases inside Docker, but running them against Docker images. There is a big advantage to writing test cases against an image. Here is a brief outline of my test process: run two Docker containers (app + DB), load data when MySQL starts, and exercise the API on the app container. Check out this script for a quick example.
- Deployment Process
You can use Docker images to self-deploy. Many of the major hosting providers support hosting Docker, and if you have a dedicated node/VM with shell access, it makes things easier. Just set up Docker and run your image on the port you want.
- Continuous Deployment
Docker is said to be a natural fit for continuous integration and continuous deployment. With Docker, continuous deployment becomes very simple: each new release just starts from a new image. There are many options for automating this part of the job, and Kubernetes is a familiar name: an open source container cluster management platform that provides automatic deployment, automatic scaling, maintenance, and other functions for container clusters.
- Multi-tenancy environment
One interesting use case for Docker is multi-tenant applications, where it can help avoid rewriting critical applications. If you expose application services to multiple tenants (a tenant being a group of users, such as an organization), an application designed as a single-tenant solution can quickly offer multi-tenant service using sub-domains plus Docker. One example of this scenario is building a fast, easy-to-use multi-tenant environment for IoT applications. Multi-tenant code bases are complex and hard to work with, and reprogramming such an application wastes time and money. With Docker, an isolated environment can be created for multiple instances of each tenant's application layer, which is both simple and cheap, thanks to Docker's startup speed and its efficient diff mechanism.
- Multiple Apps from one machine
This is somewhat related to the microservices point above, but even if you are not using microservices and are just running several ordinary services, Docker can still manage all the services on a single machine quite well. Use folder mounts to persist the data of each stateful Docker container.
- Scaling QPS
Docker helps you scale horizontally easily by creating another container. If you run into huge peak traffic, Docker can help you out — just add more machines and increase the number of containers running behind the load balancer.
Readers who want a fuller picture can refer to: a 10,000-word deep dive into Docker's architecture, principles, features, and usage.
Docker vs. OpenStack
Docker ecosystem at a glance
Docker installation
[root@centos7 ~]# yum install docker -y
[root@centos7 ~]# systemctl start docker
Downloading an Image File
[root@centos7 ~]# docker pull centos:latest
Trying to pull repository docker.io/library/centos ...
latest: Pulling from docker.io/library/centos
93857f76ae30: Pull complete
Digest: sha256:4eda692c08e0a065ae91d74e82fff4af3da307b4341ad61fa61771cc4659af60
[root@centos7 ~]# docker images
REPOSITORY         TAG      IMAGE ID       CREATED      SIZE
docker.io/centos   latest   a8493f5f50ff   3 days ago   192.5 MB
Removing an image
[root@centos7 ~]# docker rmi a8493f5f50ff   ## remove by image ID
Docker container creation and management
1) Create a container
Method one:
[root@centos7 ~]# docker run centos /bin/echo "nihao"   ## create a container
nihao
[root@centos7 ~]# docker ps -a   ## view all containers
CONTAINER ID   IMAGE    COMMAND             CREATED          STATUS                      PORTS   NAMES
3c113f9a4f1b   centos   "/bin/echo nihao"   43 seconds ago   Exited (0) 41 seconds ago           boring_liskov
No container name was specified here, so one was generated automatically; the container exited automatically once its command finished.
Method 2: Create a container with a custom name
[root@centos7 ~]# docker run --name mgg -t -i centos /bin/bash   ## --name assigns a name, -t allocates a pseudo-terminal, -i keeps STDIN open
[root@2db7f1389dbd /]# ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 22:46 ?        00:00:00 /bin/bash
root        13     1  0 22:49 ?        00:00:00 ps -ef
[root@centos7 ~]# docker ps
CONTAINER ID   IMAGE    COMMAND       CREATED         STATUS         PORTS   NAMES
2db7f1389dbd   centos   "/bin/bash"   4 minutes ago   Up 4 minutes           mgg
docker ps -a shows all containers, including those that are not running (similar to virsh list --all).
2) Enter, exit, and start the container
[root@centos7 ~]# docker start 2db7f1389dbd    ## start the container
[root@centos7 ~]# docker attach 2db7f1389dbd   ## attach to the running container
With this entry method, the container enters the stopped state after you exit, as shown below:
[root@2db7f1389dbd /]# exit
exit
[root@centos7 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3) Run the nsenter command to enter the container
[root@centos7 ~]# nsenter --help
Usage:
 nsenter [options] <program> [<argument>...]
Run a program with namespaces of other processes.
Options:
 -t, --target <pid>       target process to get namespaces from
 -m, --mount[=<file>]     enter mount namespace
 -u, --uts[=<file>]       enter UTS namespace (hostname etc)
 -i, --ipc[=<file>]       enter System V IPC namespace
 -n, --net[=<file>]       enter network namespace
 -p, --pid[=<file>]       enter pid namespace
 -U, --user[=<file>]      enter user namespace
 -S, --setuid <uid>       set uid in entered namespace
 -G, --setgid <gid>       set gid in entered namespace
     --preserve-credentials do not touch uids or gids
 -r, --root[=<dir>]       set the root directory
 -w, --wd[=<dir>]         set the working directory
 -F, --no-fork            do not fork before exec'ing <program>
 -Z, --follow-context     set SELinux context according to --target PID
 -h, --help               display this help and exit
 -V, --version            output version information and exit
Gets the PID of the container
[root@centos7 ~]# docker inspect --format "{{.State.Pid}}" 2db7f1389dbd
4580
[root@centos7 ~]# nsenter -t 4580 -u -i -n -p
[root@2db7f1389dbd ~]# hostname
2db7f1389dbd
[root@2db7f1389dbd ~]# exit
logout
[root@centos7 ~]# docker ps
CONTAINER ID   IMAGE    COMMAND       CREATED          STATUS         PORTS   NAMES
2db7f1389dbd   centos   "/bin/bash"   22 minutes ago   Up 7 minutes           mgg
4) Delete the container
[root@centos7 ~]# docker ps -a
CONTAINER ID   IMAGE    COMMAND             CREATED          STATUS                      PORTS   NAMES
2db7f1389dbd   centos   "/bin/bash"         31 minutes ago   Up 16 minutes                       mgg
3c113f9a4f1b   centos   "/bin/echo nihao"   38 minutes ago   Exited (0) 38 minutes ago           boring_liskov
[root@centos7 ~]# docker rm 3c113f9a4f1b      ## remove a stopped container
3c113f9a4f1b
[root@centos7 ~]# docker rm -f 3c113f9a4f1b   ## force-remove a running container
[root@centos7 ~]# docker ps -a
CONTAINER ID   IMAGE    COMMAND       CREATED          STATUS          PORTS   NAMES
2db7f1389dbd   centos   "/bin/bash"   31 minutes ago   Up 16 minutes           mgg
[root@centos7 ~]# docker run --rm centos /bin/echo "hello"   ## --rm removes the container automatically on exit
[root@centos7 ~]# docker kill $(docker ps -a -q)             ## kill all running containers
Docker network mode
Docker uses Linux bridges to provide communication between containers. There are four network modes:
- Host mode, specified with --net=host
- Container mode, specified with --net=container:NAME_or_ID
- None mode, specified with --net=none
- Bridge mode, specified with --net=bridge (the default)
- Host mode
If a container uses host mode, it does not get a separate Network Namespace but shares one with the host. The container does not virtualize its own network interface or configure its own IP address; instead it uses the host's IP address and ports, as if the application were running directly on the host. However, the container's file system, process list, and so on remain isolated from the host.
- Container mode
This pattern specifies that the newly created container shares a Network Namespace with an existing container, rather than with the host. The newly created container does not create its own network adapter and IP address, but shares IP address and port range with a specified container. Again, the two containers remain isolated except for the network aspect.
- None mode
This mode differs from the previous two: a container in none mode has its own Network Namespace, but no network configuration at all. You must manually add a network interface and configure an IP address for the container yourself.
- Bridge mode
This is Docker's default network setting. In this mode each container is assigned its own Network Namespace, and the Docker containers on a host are connected to a virtual network bridge.
For more information about Docker container network, please refer to: Docker container network – Basic, Docker container network – Implementation.
Docker data storage
Docker manages data in one of two ways:
- Data volume
- Data volume container
By default, a container's data lives in the container's read-write layer, and when the container is deleted that data is lost. To persist data, you need to choose a persistence technique. Docker offers three types of storage: volumes, bind mounts, and tmpfs mounts.
Data storage Mode
Next we will learn how to store data for Docker containers using these three persistence schemes.
A bind mount covers existing files in the container, while a volume mount does not. With a volume, if files already exist at the mount point inside the image, they are copied out into the volume on the host. A bind mount behaves like mount in Linux: existing files or directories in the container are hidden by the host directory, but the original files are not changed; after unmounting, they become visible again.
Data Volumes (Volumes)
- Created and managed by Docker, and isolated from the host's core functionality
- Both named and anonymous data volumes are stored under /var/lib/docker/volumes
- A defined data volume can be used by multiple containers simultaneously and is not deleted automatically
- Volume drivers allow containers to store content on remote hosts or with cloud providers, to encrypt content, and so on
Mounting in the host directory (Bind mounts)
- Compared to a data volume, a mounted host directory has limited functionality
- The host files or directories do not need to exist in advance; they are created automatically when used
- This method allows access to sensitive files of the container, which may cause security risks
Memory mapping (tmpfs)
- Data is stored only in the container's memory and is never written to the filesystem
- Swarm services use tmpfs mounts to mount sensitive information into containers
Data volumes – Volumes
A data volume lives in a specific directory on the host, managed by Docker, and is mounted into the container.
Advantages
The Docker Volumes mechanism is commonly used to store persistent data for Docker containers. There are many advantages to using Volumes:
- Easier backup and data migration
- Use the Docker CLI command or the Docker API to manage
- It can be used on the Linux and Windows operating systems
- It can be shared more securely across multiple containers
- Volume drivers allow containers to save volume contents to a remote device or cloud provider, or to encrypt volume contents
- The contents of the new Volume can be prefilled by the container
Volumes are also generally a better choice than the container's writable layer: using volumes does not increase the size of the container, and a volume's contents live outside the container, independent of its lifetime. If the container does not produce persistent data, consider a tmpfs mount instead (which keeps the data only in the container's memory) to avoid writing data anywhere else and growing the container.
Directions for use
Initially, the -v or --volume option was used for standalone containers, while the --mount option was used for swarm services. Starting with Docker 17.06, however, --mount can also be used with standalone containers. In general, --mount is more explicit and verbose: -v packs all options into a single value, while --mount separates them. If you need to specify volume driver options, you must use --mount.
#Create a data volume
$ docker volume create my-vol
#View all data volumes
$ docker volume ls
#View information about a specified data volume
$ docker volume inspect my-vol
[
{
"Driver": "local",
"Labels": {},
"Mountpoint": "/var/lib/docker/volumes/my-vol/_data",
"Name": "my-vol",
"Options": {},
"Scope": "local"
}
]
#Remove a specified data volume
$ docker volume rm my-vol
#Remove data volumes no longer used by any container
$ docker volume prune
#Start a container with -v (the volume my-vol is created if it does not exist)
$ docker run -d --name web \
  -v my-vol:/webapp \
  training/webapp python app.py
#The same with --mount
$ docker run -d --name web \
  --mount source=my-vol,target=/webapp \
  training/webapp python app.py
#For a swarm service, use docker service create
$ docker service create -d --name devtest-service \
  --mount source=myvol2,target=/app \
  nginx:latest
#Mount as read-only
$ docker run -d --name=nginxtest \
  -v nginx-vol:/usr/share/nginx/html:ro \
  nginx:latest
## source: the name of the data volume (may be omitted for an anonymous volume)
## target/destination: where the data volume is mounted inside the container
## readonly: optional, mounts the volume read-only
## volume-opt: optional, may be used multiple times
$ docker run -d --name=nginxtest \
  --mount source=nginx-vol,destination=/usr/share/nginx/html,readonly \
  nginx:latest
Mount the remote data volume
#The plug-in SSHFS allows you to easily mount remote folders in containers
#Download the plug-in
$ docker plugin install --grant-all-permissions vieux/sshfs
#Use this driver to create SSH data volumes
$ docker volume create --driver vieux/sshfs \
-o sshcmd=test@node2:/home/test \
-o password=testpassword \
-o port=3336 \
sshvolume
#Start a container that uses the volume created with this driver
#If the two hosts have a trust relationship configured, the volume-opt password is not needed
$ docker run -d \
--name sshfs-container \
--volume-driver vieux/sshfs \
--mount src=sshvolume,target=/app,volume-opt=sshcmd=test@node2:/home/test,volume-opt=password=testpassword \
nginx:latest
Mount the host directory – bind mounts
Mounting a host directory is to mount a specific directory on a host directly into the container for use
Directions for use
#Start a container using a bind mount
$ docker run -d -it --name devtest \
-v "$(pwd)"/target:/app \
nginx:latest
$ docker run -d -it --name devtest \
--mount type=bind,source="$(pwd)"/target,target=/app \
nginx:latest
#Look at the corresponding information
$ docker inspect devtest
"Mounts": [
{
"Type": "bind",
"Source": "/tmp/source/target",
"Destination": "/app",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
}
]
#The mount mode is read-only
$ docker run -d -it --name devtest \
-v "$(pwd)"/target:/app:ro \
nginx:latest
$ docker run -d -it --name devtest \
--mount type=bind,source="$(pwd)"/target,target=/app,readonly \
nginx:latest
Special attributes
$ docker run -d -it --name devtest \
-v "$(pwd)"/target:/app \
-v "$(pwd)"/target:/app2:ro,rslave \
nginx:latest
$ docker run -d -it --name devtest \
--mount type=bind,source="$(pwd)"/target,target=/app \
--mount type=bind,source="$(pwd)"/target,target=/app2,readonly,bind-propagation=rslave \
nginx:latest
Memory mapping – tmpfs
Memory mapping is the mapping of memory into a container for internal use
Advantages
Initially the --tmpfs option was for standalone containers, while the --mount option was for the swarm cluster service. Starting with Docker 17.06, however, --mount can also be used on standalone containers. In general, --mount is more explicit and verbose. The biggest differences are that the --tmpfs flag does not support any configurable options and can only be used with standalone containers, while swarm services must use --mount to get a tmpfs mount.
Directions for use
#Use on a standalone container
$ docker run -d -it --name tmptest \
  --tmpfs /app \
  nginx:latest
$ docker run -d -it --name tmptest \
--mount type=tmpfs,destination=/app \
nginx:latest
Log driver – Logs
You can view the log output outside the container to troubleshoot and monitor problems
You can use the docker logs command to view the logs an application produces inside a Docker container, which saves you from first entering the container and then opening the application's log files. docker logs reads the container's standard output (STDOUT); whatever the application writes to STDOUT is picked up and forwarded by a component called the Logging Driver.
#Follow logs in real time
$ docker logs -f netdata
How does Docker do this? With the docker info command we can view information about the Docker daemon, which includes a Logging Driver field.
#This parameter specifies the type of the log driver
$ docker info | grep 'Logging Driver'
Logging Driver: json-file
We can set the specific Docker log driver by the –log-driver parameter in the Docker run command, and we can also specify the related options of the corresponding log driver by the –log-opt parameter.
$ docker run -d -p 80:80 --name nginx \
  --log-driver json-file \   ## use the json-file log driver
  --log-opt max-size=10m \   ## rotate once a log file reaches 10 MB
  --log-opt max-file=3 \     ## keep at most 3 log files; older files are deleted
  nginx
#Of course, you can add it to the configuration file to take effect globally
$ cat /etc/docker/daemon.json
{
"log-driver": "syslog"
}
#Modify the configuration and restart the service
$ sudo systemctl restart docker
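Combining the two ideas above, the rotation options shown earlier with docker run can also be made the global default in /etc/docker/daemon.json. This is a sketch; the size and file-count values are illustrative:

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

As with any daemon.json change, the Docker service must be restarted afterwards, and the new defaults apply only to containers created from then on.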
Additionally, it should be noted that Docker stores logs to a log file by default.
#Check the log file path
$ docker inspect --format='{{.LogPath}}' netdata
/var/lib/docker/containers/556553bcb5xxx13cbc588a4-json.log
#View real-time logs
$ tail -f `docker inspect --format='{{.LogPath}}' netdata`
Reference for the above content: escapelife.github.io/posts/c2e25…
Introduction to Docker commands
After installing the Docker container service, you need to know how to operate it. Enter docker directly on the shell command line to view the help information, as follows.
[root@master ~]# docker
Usage:  docker COMMAND
A self-sufficient runtime for containers
Options:
      --config string      Location of client config files (default "/root/.docker")
  -D, --debug              Enable debug mode
      --help               Print usage
  -H, --host list          Daemon socket(s) to connect to (default [])
  -l, --log-level string   Set the logging level ("debug", "info", "warn", "error", "fatal") (default "info")
      --tls                Use TLS; implied by --tlsverify
      --tlscacert string   Trust certs signed only by this CA (default "/root/.docker/ca.pem")
      --tlscert string     Path to TLS certificate file (default "/root/.docker/cert.pem")
      --tlskey string      Path to TLS key file (default "/root/.docker/key.pem")
      --tlsverify          Use TLS and verify the remote
  -v, --version            Print version information and quit

Management Commands:
  container   Manage containers
  image       Manage images
  network     Manage networks
  node        Manage Swarm nodes
  plugin      Manage plugins
  secret      Manage Docker secrets
  service     Manage services
  stack       Manage Docker stacks
  swarm       Manage Swarm
  system      Manage Docker
  volume      Manage volumes

Commands:
  attach      Attach to a running container
  build       Build an image from a Dockerfile
  commit      Create a new image from a container's changes
  cp          Copy files/folders between a container and the local filesystem
  create      Create a new container
  diff        Inspect changes on a container's filesystem
  events      Get real time events from the server
  exec        Run a command in a running container
  export      Export a container's filesystem as a tar archive
  history     Show the history of an image
  images      List images
  import      Import the contents from a tarball to create a filesystem image
  info        Display system-wide information
  inspect     Return low-level information on Docker objects
  kill        Kill one or more running containers
  load        Load an image from a tar archive or STDIN
  login       Log in to a Docker registry
  logout      Log out from a Docker registry
  logs        Fetch the logs of a container
  pause       Pause all processes within one or more containers
  port        List port mappings or a specific mapping for the container
  ps          List containers
  pull        Pull an image or a repository from a registry
  push        Push an image or a repository to a registry
  rename      Rename a container
  restart     Restart one or more containers
  rm          Remove one or more containers
  rmi         Remove one or more images
  run         Run a command in a new container
  save        Save one or more images to a tar archive (streamed to STDOUT by default)
  search      Search the Docker Hub for images
  start       Start one or more stopped containers
  stats       Display a live stream of container(s) resource usage statistics
  stop        Stop one or more running containers
  tag         Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
  top         Display the running processes of a container
  unpause     Unpause all processes within one or more containers
  update      Update configuration of one or more containers
  version     Show the Docker version information
  wait        Block until one or more containers stop, then print their exit codes
There are many commands, but to highlight these 20, please read the following article in detail:
How many of these 20 Docker commands do you know?
Dockerfile
A brief introduction to the Dockerfile
Docker can build images automatically from the contents of a Dockerfile. A Dockerfile is a text file containing a series of instructions for creating an image; each line supports one instruction.
A Dockerfile is composed of four parts:
- Base image message
- Maintainer information
- Mirror operation instruction
- Instructions are executed when the container starts
Dockerfile instructions are case-insensitive (uppercase is recommended); # starts a comment; each line supports one instruction, and an instruction can take multiple arguments.
Dockerfile instructions fall into two groups:
- Build instructions: used to build the image; the specified action is not performed in the container that runs the image.
- Setting instructions: used to set properties of the image; the specified action takes effect in the container that runs the image.
Dockerfile instruction
There are the following types of Dockerfile directives:
- 1. FROM
Specifies the base image on which the new image is built; base images typically come from a remote registry or exist locally. The first instruction in a Dockerfile must be FROM. If a Dockerfile needs to create multiple images, multiple FROM instructions can be used.
#The specific usage is as follows:
FROM <image_name>        # defaults to the latest version
FROM <image>:<version>   # specify a version
- 2. MAINTAINER
Specifies information about the creator of the image
#The specific use method is as follows:
MAINTAINER <name>
- 3. RUN
Runs any command supported by the base image. Multiple RUN instructions may be used, and \ can be used to continue a command across lines.
#The specific use method is as follows:
RUN <command>                            # shell form
RUN ["executable", "param1", "param2"]   # exec form
- 4. CMD
Can be a command or a script, but it is executed only when the container starts; if several CMD instructions are present, only the last one takes effect by default.
#The specific use method is as follows:
CMD ["executable", "param1", "param2"]   # exec form (recommended)
CMD command param1 param2                # shell form, run under /bin/sh
CMD ["param1", "param2"]                 # default parameters supplied to ENTRYPOINT
- 5. EXPOSE
Declares the port mapping of the container (container to host). When running the container, add the -p option to publish the ports set by EXPOSE; EXPOSE can declare multiple ports, and -p can be used multiple times accordingly. Use docker port <container ID> <container port> to look up the corresponding host port.
#The specific use method is as follows:
EXPOSE <port> [<port2> <port3> ...]
- 6. ENV
Defines environment variables that persist in the image; they can be overridden at run time with docker run --env key=value.
#The specific use method is as follows:
ENV <key> <value>
ENV JAVA_HOME /usr/local/jdk
- 7. ADD
Copies the specified source file, directory, and URL to the specified directory in the container. The permission of all files and folders copied to the Container is 0755, and the UID and GID are 0.
If the source is a directory, all files in the directory are added to the container, excluding the directory.
If the source file is in a recognizable compressed format, Docker will help decompress it (note the compressed format);
If the source is a file and the destination does not end with a slash, the destination is treated as a file and the source's contents are written to it.
If the source is a file and the destination ends with a slash, the source file is copied into the destination directory.
#The specific use method is as follows:
ADD <source> <target>
- 8. COPY
Copies a source on the local host (by default relative to the directory containing the Dockerfile) to a destination in the container. If the destination path does not exist, it is created automatically.
#The specific use method is as follows:
COPY <source> <target>
COPY web/index.html /var/web/
- The destination must be an absolute path; if the path does not exist, it is created automatically
- The source must be a path relative to the directory containing the Dockerfile
- If the source is a directory, only the contents of the directory are copied, not the directory itself
- 9. ENTRYPOINT
Specifies the command executed when the container starts. If several ENTRYPOINT instructions are present, only the last one takes effect, and it is not overridden by arguments supplied to docker run.
#The specific use method is as follows:
ENTRYPOINT ["command", "param1", "param2"]
- 10. VOLUME
Creates a mount point that can be mounted from the local host or from other containers, typically for storing data. The same effect can be achieved with docker run -v.
#The specific use method is as follows:
VOLUME [directory_name]
VOLUME /docker_data
- 11. USER
Specifies the user or UID used when the container runs; RUN, CMD, and ENTRYPOINT commands all run as this user.
#The specific use method is as follows:
USER [username/uid]
- 12. WORKDIR
Sets the directory in which RUN, CMD, and ENTRYPOINT commands run. Multiple WORKDIR instructions may be used; a relative path is resolved against the previous WORKDIR. For example, WORKDIR /data followed by WORKDIR work results in /data/work. The path may also be an environment variable.
#The specific use method is as follows:
WORKDIR [path]
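To make the relative-path rule concrete, here is a small sketch (the directory names are illustrative):

```dockerfile
FROM centos
WORKDIR /data      # working directory is now /data
WORKDIR work       # relative path: resolves to /data/work
RUN pwd            # during the build this prints /data/work
```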
- 13. ONBUILD
Configures the image being built to serve as a base image for other images: the instruction registered with ONBUILD is not executed in the current build, but runs first whenever another image is later built FROM this one.
#The specific use method is as follows:
ONBUILD [INSTRUCTION]
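For example, a hypothetical base image could register triggers that run only when a child image is built from it (the file name is an assumption):

```dockerfile
# Base image: these two instructions do nothing in this build,
# but fire first in any build that uses FROM this image.
FROM centos
ONBUILD ADD app.tar.gz /usr/local/
ONBUILD RUN echo "running in the child image build"
```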
Quickly build images with Dockerfile
Next, we build a Tomcat image to demonstrate the use of a Dockerfile. The prerequisite is a working Docker environment; installing Docker was covered earlier and is not repeated here.
[root@master tomcat]# ll
total 190504
-rw-r--r-- 1 root root   9552281 Jun  7 15:07 apache-tomcat-8.5.31.tar.gz
-rw-r--r-- 1 root root        32 Jul  3 09:41 index.jsp
-rw-r--r-- 1 root root 185515842 Sep 20  2017 jdk-8u144-linux-x64.tar.gz
[root@master tomcat]# cat index.jsp
welcome to mingongge's web site
[root@master tomcat]# pwd
/root/docker/tomcat
[root@master tomcat]# vim Dockerfile
#config file start#
FROM centos
MAINTAINER Mingongge

#add jdk and tomcat software
ADD jdk-8u144-linux-x64.tar.gz /usr/local/
ADD apache-tomcat-8.5.31.tar.gz /usr/local/
ADD index.jsp /usr/local/apache-tomcat-8.5.31/webapps/ROOT/

#config java and tomcat ENV
ENV JAVA_HOME /usr/local/jdk1.8.0_144
ENV CLASSPATH $JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
ENV CATALINA_HOME /usr/local/apache-tomcat-8.5.31/
ENV PATH $PATH:$JAVA_HOME/bin:$CATALINA_HOME/bin

#config listen port of tomcat
EXPOSE 8080

#config startup command of tomcat
CMD /usr/local/apache-tomcat-8.5.31/bin/catalina.sh run
#end of config-file#
The build process
[root@master tomcat]# docker build -t tomcat-web .
Sending build context to Docker daemon 195.1 MB
Step 1/11 : FROM centos
 ---> 49f7960eb7e4
Step 2/11 : MAINTAINER Mingongge
 ---> Running in afac1e218299
 ---> a404621fac22
Removing intermediate container afac1e218299
Step 3/11 : ADD jdk-8u144-linux-x64.tar.gz /usr/local/
 ---> 4e22dafc2f76
Removing intermediate container b1b23c6f202a
Step 4/11 : ADD apache-tomcat-8.5.31.tar.gz /usr/local/
 ---> 1efe59301d59
Removing intermediate container aa78d5441a0a
Step 5/11 : ADD index.jsp /usr/local/apache-tomcat-8.5.31/webapps/ROOT/
 ---> f09236522370
Removing intermediate container eb54e6eb963a
Step 6/11 : ENV JAVA_HOME /usr/local/jdk1.8.0_144
 ---> Running in 3aa91b03d2d1
 ---> b497c5482fe0
Removing intermediate container 3aa91b03d2d1
Step 7/11 : ENV CLASSPATH $JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
 ---> Running in f2649b5069be
 ---> 9cedb218a8df
Removing intermediate container f2649b5069be
Step 8/11 : ENV CATALINA_HOME /usr/local/apache-tomcat-8.5.31/
 ---> Running in 39ef620232d9
 ---> ccab256164fe
Removing intermediate container 39ef620232d9
Step 9/11 : ENV PATH $PATH:$JAVA_HOME/bin:$CATALINA_HOME/bin
 ---> Running in a58944d03d4a
 ---> f57de761a759
Removing intermediate container a58944d03d4a
Step 10/11 : EXPOSE 8080
 ---> Running in 30681437d265
 ---> b906dcc26584
Removing intermediate container 30681437d265
Step 11/11 : CMD /usr/local/apache-tomcat-8.5.31/bin/catalina.sh run
 ---> Running in 437790cc642a
 ---> 95204158ee68
Removing intermediate container 437790cc642a
Successfully built 95204158ee68
Start the container with the built image
[root@master tomcat]# docker run -d -p 8080:8080 tomcat-web
b5b65bee5aedea2f48edb276c543c15c913166bf489088678c5a44fe9769ef45
[root@master tomcat]# docker ps
CONTAINER ID  IMAGE       COMMAND                 CREATED        STATUS        PORTS                   NAMES
b5b65bee5aed  tomcat-web  "/bin/sh -c '/usr/..."  5 seconds ago  Up 4 seconds  0.0.0.0:8080->8080/tcp  vigilant_heisenberg
Access the container
Enter http://server-ip:8080 in the browser, and the result is as follows:
The Docker Three Musketeers
Container technology | Docker Three Musketeers: Compose
Container technology | Docker Three Musketeers: Docker Machine
Build a high-powered and visual Docker container monitoring system platform
Private image warehouse setup
When we run docker pull xxx, Docker by default looks for the required image on Docker Hub and downloads it. This is Docker's default public repository: anyone can view, download, and use the images directly. However, for network reasons, the download speed is limited and slow. When using Docker in a company's intranet environment, we also generally do not upload image files to the public registry, but sharing them internally then becomes a problem, and that is where private repositories come in.
What is a private warehouse?
A private repository is a local (intranet) image repository with functions similar to the public one on the Internet. Once it is built, we can push packaged images to it, and other intranet users can pull and use them.
This article uses the officially provided Registry image to build a private image repository on the enterprise intranet.
Environment introduction
Two hosts with the Docker environment installed
- Server: 192.168.3.82, the private repository server, which runs the Registry container
- Client: 192.168.3.83, a test client used to upload and download image files
Installation and deployment process
Download the official Registry image file
[root@master ~]# docker pull registry
Using default tag: latest
Trying to pull repository docker.io/library/registry ...
latest: Pulling from docker.io/library/registry
81033e7c1d6a: Pull complete
b235084c2315: Pull complete
c692f3a6894b: Pull complete
ba2177f3a70e: Pull complete
a8d793620947: Pull complete
Digest: sha256:672d519d7fd7bbc7a448d17956ebeefe225d5eb27509d8dc5ce67ecb4a0bce54
Status: Downloaded newer image for docker.io/registry:latest
[root@master ~]# docker images |grep registry
docker.io/registry latest d1fd7d86a825 5 months ago 33.3 MB
Running the Registry container
[root@master ~]# mkdir -p /docker/registry
[root@master ~]# docker run -itd -v /docker/registry/:/docker/registry -p 5000:5000 --restart=always --name registry registry:latest
26d0b91a267f684f9da68f01d869b31dbc037ee6e7bf255d8fb435a22b857a0e
[root@master ~]# docker ps
CONTAINER ID  IMAGE            COMMAND                 CREATED        STATUS        PORTS                   NAMES
26d0b91a267f  registry:latest  "/entrypoint.sh /e..."  4 seconds ago  Up 3 seconds  0.0.0.0:5000->5000/tcp  registry
Parameter notes
1) -itd: open a pseudo terminal in the container for interactive operation, and run it in the background;
2) -v: bind the host's /docker/registry directory to the container's /docker/registry directory (the directory in which the Registry container stores image files) for data persistence;
3) -p: map the port; accessing port 5000 on the host reaches the Registry container's service;
4) --restart=always: automatically restart the container if it exits unexpectedly;
5) --name registry: name the container "registry"; any name can be used;
6) registry:latest: the image just pulled;
View the image file of the remote repository
[root@master ~]# curl http://localhost:5000/v2/_catalog
{"repositories":[]}
You can also visit http://server-ip:5000/v2/_catalog in a browser; the result is the same: empty, with no files.
Client Operation
Modify the downloaded image source
[root@slave1 ~]# vim /etc/docker/daemon.json
{
"registry-mirrors":["https://registry.docker-cn.com"]
}
[root@slave1 ~]# systemctl restart docker
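Since daemon.json must be strict JSON, a syntax error here will prevent the Docker daemon from starting. A small sketch of validating the file before the restart, using python3's json.tool as a stand-in validator (the temp-file path and values mirror the article's example):

```shell
# write the candidate config to a temp file and validate it before
# copying it to /etc/docker/daemon.json and restarting the daemon
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "insecure-registries": ["192.168.3.82:5000"]
}
EOF
if python3 -m json.tool "$cfg" > /dev/null 2>&1; then
  result=valid
else
  result=invalid
fi
echo "daemon.json is $result JSON"
rm -f "$cfg"
```

Only after the file validates is it safe to run `systemctl restart docker`.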
Download the Test Image
[root@slave1 ~]# docker pull nginx
Using default tag: latest
Trying to pull repository docker.io/library/nginx ...
latest: Pulling from docker.io/library/nginx
683abbb4ea60: Pull complete
6ff57cbc007a: Pull complete
162f7aebbf40: Pull complete
Digest: sha256:636dd2749d9a363e5b57557672a9ebc7c6d041c88d9aef184308d7434296feea
Status: Downloaded newer image for docker.io/nginx:latest
Tag the image
[root@slave1 ~]# docker tag nginx:latest 192.168.3.82:5000/nginx:v1
[root@slave1 ~]# docker images
REPOSITORY               TAG     IMAGE ID      CREATED      SIZE
192.168.3.82:5000/nginx  v1      649dcb69b782  8 hours ago  109 MB
docker.io/nginx          latest  649dcb69b782  8 hours ago  109 MB
Upload the image
[root@slave1 ~]# docker push 192.168.3.82:5000/nginx:v1
The push refers to a repository [192.168.3.82:5000/nginx]
Get https://192.168.3.82:5000/v1/_ping: http: server gave HTTP response to HTTPS client
#Note the error message: the client insists on HTTPS. The solution is as follows:
[root@slave1 ~]# vim /etc/docker/daemon.json
{
  "registry-mirrors":["https://registry.docker-cn.com"],
  "insecure-registries":["192.168.3.82:5000"]
}
#Add the address of the private image server. Note that the file is JSON, which has strict syntax requirements. The configuration takes effect after the Docker service is restarted.
[root@slave1 ~]# systemctl restart docker
[root@slave1 ~]# docker push 192.168.3.82:5000/nginx:v1
The push refers to a repository [192.168.3.82:5000/nginx]
6ee5b085558c: Pushed
78f25536dafc: Pushed
9c46f426bcb7: Pushed
v1: digest: sha256:edad5e71815c79108ddbd1d42123ee13ba2d8050ad27cfa72c531986d03ee4e7 size: 948
View the image repository again
[root@master ~]# curl http://localhost:5000/v2/_catalog
{"repositories":["nginx"]}
[root@master ~]# curl http://localhost:5000/v2/nginx/tags/list
{"name":"nginx","tags":["v1"]}
#See what versions are available
Test the download
#First delete the image files previously downloaded from the public library on the client host
[root@slave1 ~]# docker images
REPOSITORY               TAG     IMAGE ID      CREATED       SIZE
192.168.3.82:5000/nginx  v1      649dcb69b782  10 hours ago  109 MB
docker.io/nginx          latest  649dcb69b782  10 hours ago  109 MB
[root@slave1 ~]# docker rmi -f 649dcb69b782
Untagged: 192.168.3.82:5000/nginx:v1
Untagged: 192.168.3.82:5000/nginx@sha256:edad5e71815c79108ddbd1d42123ee13ba2d8050ad27cfa72c531986d03ee4e7
Untagged: docker.io/nginx:latest
Untagged: docker.io/nginx@sha256:636dd2749d9a363e5b57557672a9ebc7c6d041c88d9aef184308d7434296feea
Deleted: sha256:649dcb69b782d4e281c92ed2918a21fa63322a6605017e295ea75907c84f4d1e
Deleted: sha256:bf7cb208a5a1da265666ad5ab3cf10f0bec1f4bcb0ba8d957e2e485e3ac2b463
Deleted: sha256:55d02c20aa07136ab07ab47f4b20b97be7a0f34e01a88b3e046a728863b5621c
Deleted: sha256:9c46f426bcb704beffafc951290ee7fe05efddbc7406500e7d0a3785538b8735
[root@slave1 ~]# docker images
REPOSITORY  TAG  IMAGE ID  CREATED  SIZE
#At this point all image files on the client are deleted
[root@slave1 ~]# docker pull 192.168.3.82:5000/nginx:v1
Trying to pull repository 192.168.3.82:5000/nginx ...
v1: Pulling from 192.168.3.82:5000/nginx
683abbb4ea60: Pull complete
6ff57cbc007a: Pull complete
162f7aebbf40: Pull complete
Digest: sha256:edad5e71815c79108ddbd1d42123ee13ba2d8050ad27cfa72c531986d03ee4e7
Status: Downloaded newer image for 192.168.3.82:5000/nginx:v1
[root@slave1 ~]# docker images
REPOSITORY               TAG  IMAGE ID      CREATED       SIZE
192.168.3.82:5000/nginx  v1   649dcb69b782  11 hours ago  109 MB
#The client has obtained the required image from the remote server; other intranet servers can share the image server in the same way
The steps above show the process of quickly building and testing a private image repository with Docker Registry. You can also use Harbor to build an enterprise-grade private image repository.
Docker visualization tool
Docker is a very popular container technology, widely used across industries. Managing Docker containers, however, can be a problem, so today I introduce two Docker visualization tools that I hope you will find helpful.
Portainer
Portainer is a Docker visual management tool that allows us to easily view and manage Docker containers in web pages.
Using Portainer is as simple as running the following two commands. They create a volume dedicated to Portainer, then create and run the container on ports 8000 and 9000.
$ docker volume create portainer_data
$ docker run --name portainer -d -p 8000:8000 -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer
Then open the corresponding address in the browser and you will find it running successfully. The first time you run it, you need to set up an account, and then select the Docker host you want to manage. (Screenshots: set up account; select hosts to manage.)
After that, you can see the Docker container running on the local machine. Click on them to manage the container. The entries on the left allow you to manage volumes, create containers, view host information, and so on. It’s pretty much all there is, and it’s one of the tools I recommend.
LazyDocker
LazyDocker is a terminal-based visualization tool that supports both keyboard operation and mouse clicks. It may not be as polished as Portainer, but it can be easier for developers to use: most developers already run Docker from the command line, and when they occasionally need a graphical view, LazyDocker does the job.
Official website demo
Running LazyDocker is as simple as the following command.
docker run --rm -it \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v ~/.config/lazydocker:/.config/jesseduffield/lazydocker \
  lazyteam/lazydocker
Of course, if you find LazyDocker useful and want to use it often, you can create an alias for it in your shell configuration file, turning it into a simple command. For example, if you use zsh, add the following to your .zshrc file; from then on, you can launch LazyDocker directly with lzd.
alias lzd='docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock -v ~/.config/lazydocker:/.config/jesseduffield/lazydocker lazyteam/lazydocker'
You can then view information about the Docker container, image, and volume in the terminal. LazyDocker supports keyboard operation and mouse click, directly with the mouse click can view the corresponding information.
Note that if your terminal LazyDocker’s graphics display is messy, don’t worry, it’s just a matter of displaying fonts. Resetting the terminal font will solve this problem.
The above content from: www.toutiao.com/i6780014313…
The Docker community has created many open source tools that help us handle various use cases, more than you might imagine. Most of them are open source and available on GitHub. Over the past two years I have been a huge fan of Docker and have used it in most of my development projects. Once you start using Docker, you will find it applies in more scenarios than you initially expected; you will want Docker to do as much as possible for you, and it will not let you down. The community is very active, and with so many useful tools appearing every day, it is hard to keep an eye on every innovation. To help, I have collected some interesting and practical Docker tools that I use in my daily work; they have improved my efficiency and reduced the work that needs to be done manually. The five covered in this article are Watchtower, docker-gc, docker-slim, Rocker (which breaks the limits of Dockerfiles), and ctop (a top-like interface for containers).
5 Open Source Docker Tools You Should Know… and Docker service terminal UI management tools. In the end, everyone chooses their own tools for managing Docker containers according to their usage habits and actual production requirements.
Docker container monitoring system
With online services fully dockerized, monitoring Docker containers becomes very important. Traditional SA monitoring systems monitor physical machines; when a physical machine runs multiple containers, a single monitoring chart cannot distinguish the resource usage of each container.
We recommend this article: Build a high-powered and visual Docker container monitoring system platform
Docker log management best practices
10 Less popular but very practical Docker tips
Docker comes up a lot in daily work. Besides frequently used commands such as docker run and docker stop, Docker has many very useful but less-used commands. Here is a summary:
1. docker top
This command is used to view process information in a container. For example, if you want to see how many nginx processes are in a container, you can do this:
docker top 3b307a09d20d
UID PID PPID C STIME TTY TIME CMD
root 805 787 0 Jul13 ? 00:00:00 nginx: master process nginx -g daemon off;
systemd+ 941 805 0 Jul13 ? 00:03:18 nginx: worker process
2. docker load && docker save
I usually use these two commands to download the Kubernetes image package, because you know the Internet speed in China is not as fast as abroad.
docker save exports an image to a tar file. You can do this:
~ docker save registry:2.7.1 > registry-2.7.1.tar
#Meanwhile, docker load can import images from a tar file into Docker
~ docker load < registry-2.7.1.tar
3. docker search
This command helps you easily search for images in DockerHub from the command line, such as:
~ docker search nginx
NAME                         DESCRIPTION                                     STARS  OFFICIAL  AUTOMATED
nginx                        Official build of Nginx.                        13519  [OK]
jwilder/nginx-proxy          Automated Nginx proxy for Docker con...         1846             [OK]
richarvey/nginx-php-fpm      Container running nginx + php-fpm capable of…   780              [OK]
linuxserver/nginx            An Nginx container, brought to you by LinuxS…   12
bitnami/nginx                Bitnami nginx Docker Image                      87               [OK]
tiangolo/nginx-rtmp          Docker image with Nginx using the nginx-rtmp…   85               [OK]
jc21/nginx-proxy-manager     Docker container for managing Nginx proxy ho…   73
alfg/nginx-rtmp              NGINX, nginx-rtmp-module and FFmpeg from sou…   71               [OK]
nginxdemos/hello             NGINX webserver that serves a simple page co…   57               [OK]
jlesage/nginx-proxy-manager  Docker container for Nginx Proxy Manager        53               [OK]
nginx/nginx-ingress          NGINX Ingress Controller for Kubernetes         37
......
Of course, this feature may not work particularly well in China, since……
4. docker events
This command can help you get real-time information about various docker events, such as the creation of a container.
~ docker events
2020-07-28T21:28:46.000403018+08:00 image load sha256:432bf69f0427b52cad10897342eaf23521b7d973566354118e9a59c4d31b5fae (name=sha256:432bf69f0427b52cad10897342eaf23521b7d973566354118e9a59c4d31b5fae)
5. docker update
When you run a container and find that some parameters are not what you want, for example, the CPU or memory limit of your Nginx container is too small, you can use docker update to change these parameters.
~ docker update nginx --cpus 2
6. docker history
Use this command when you have changed an image, but have forgotten the change commands for each layer, or you want to see how an image is built. For example:
~ docker history traefik:v2.1.6
IMAGE         CREATED       CREATED BY                                      SIZE    COMMENT
5212a87ddaba  5 months ago  /bin/sh -c #(nop)  LABEL org.opencontainers.…   0B
<missing>     5 months ago  /bin/sh -c #(nop)  CMD ["traefik"]              0B
<missing>     5 months ago  /bin/sh -c #(nop)  ENTRYPOINT ["/entrypoint.…   0B
<missing>     5 months ago  /bin/sh -c #(nop)  EXPOSE 80                    0B
<missing>     5 months ago  /bin/sh -c #(nop) COPY file:59a219a1fb7a9dc8…   419B
<missing>     5 months ago  /bin/sh -c set -ex;  apkArch="$(apk --print-…   52.9MB
<missing>     5 months ago  /bin/sh -c apk --no-cache add ca-certificate…   1.85MB
<missing>     6 months ago  /bin/sh -c #(nop)  CMD ["/bin/sh"]              0B
<missing>     6 months ago  /bin/sh -c #(nop) ADD file:a1906f14a4e217a49…   4.81MB
7. docker wait
This command is used to check the exit status of the container, for example:
~ docker wait 7f7f0522a7d0
0
This way you can tell if the container exits normally or unexpectedly.
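The printed value follows the usual Unix convention: zero for a clean exit, non-zero otherwise. A sketch of acting on that value, with a plain subshell standing in for a container (so it runs without Docker):

```shell
# stand-in for a container's main process; a real script would use
# status=$(docker wait <container>)
( exit 3 )
status=$?
if [ "$status" -eq 0 ]; then
  echo "container exited normally"
else
  echo "container exited abnormally with code $status"
fi
```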
8. docker pause && docker unpause
This command is used when you run a container but want to pause it.
~ docker pause 7f7f0522a7d0
9. docker diff
This command is used when you run a container and you don’t know which files have been changed. For example:
~ docker diff 38c59255bf6e
C /etc
A /etc/localtime
C /var
C /var/lib
A /var/lib/registry
10. docker stats
This is docker’s built-in monitoring command. You can use this command when you want to check the memory and CPU usage of all containers on the current host.
~ docker stats
CONTAINER ID  NAME                        CPU %  MEM USAGE / LIMIT    MEM %  NET I/O          BLOCK I/O        PIDS
1c5ade04e7f9  redis                       0.08%  17.53MiB / 47.01GiB  0.04%  10.9GB / 37GB    0B / 0B          4
afe6d4ebe409  kafka-exporter              0.09%  16.91MiB / 47.01GiB  0.04%  1.97GB / 1.53GB  752MB / 0B       23
f0c7c01a9c34  kafka-docker_zookeeper      0.01%  308.8MiB / 47.01GiB  0.64%  20.2MB / 12.2MB  971MB / 3.29MB   28
da8c5008955f  kafka-docker_kafka-manager  0.08%  393.2MiB / 47.01GiB  0.82%  1.56MB / 2.61MB  1.14GB / 0B      60
c8d51c583c49  kafka-docker_kafka          1.63%  1.256GiB / 47.01GiB  2.67%  30.4GB / 48.9GB  22.3GB / 5.77GB  85
......
Original text: suo.im/6n2lla
Learning Docker: the 11 most Common mistakes beginners make!
Many end up using Docker instead. Docker has many advantages, such as:
- 1. Integration — Packing operating systems, library versions, configuration files, applications, etc. into containers. This ensures that the images tested by QA will carry the same behavior to production.
- 2. Lightweight — Minimal memory footprint, allocated only for major processes.
- 3. Fast startup — starts with one click, as fast as an ordinary Linux process.
Despite this, many users still treat containers as ordinary virtual machines and forget an important feature of containers: they are disposable. Because of this feature, some users need to change their mental model of containers. To make better use of Docker containers, there are some things you should never do:
1. Do not store data in containers
Containers can be interrupted, replaced, or broken. Version 1.0 applications running in containers can easily be replaced by version 1.1 without affecting or causing data loss. Therefore, if you need to store data, store it in volumes. In this case, you should also pay attention to whether the two containers write to the same volume, which can cause corruption. Ensure that the application is suitable for writing to a shared data store.
2. Do not treat containers as virtual machines
Some people think of containers as virtual machines, so most of them assume the application should be deployed into an existing, running container. This may be true during the development phase, with its constant deployment and debugging, but for the continuous delivery (CD) pipelines of QA and production, the application should be part of the image. Remember: containers are fleeting.
3. Do not create a large image
Large images are difficult to distribute. Make sure you include only the files and libraries the application actually needs. Do not install unnecessary packages and do not run 'yum update', which downloads a large number of files into a new image layer.
4. Do not use single-layer images
To take advantage of the layered file system, always create your own base image layer for the operating system, then a layer for user definitions, then a layer for the runtime installation, then a layer for configuration, and finally a layer for the application. This makes images easier to recreate, manage, and distribute.
5. Do not create images from a running container
In other words, do not use the "docker commit" command to create an image. This method of image creation is not reproducible and should be avoided entirely. Always use a Dockerfile or any other fully reproducible S2I (source-to-image) method, so that changes to the Dockerfile can be tracked if it is stored in a source control repository (Git).
6. Don’t rely only on the "latest" tag
The latest tag is like a "SNAPSHOT" for Maven users. Because of the layered file system, explicit tags are encouraged. You don't want the surprise, months after building an image, of finding that your application no longer works because the parent layer (the FROM in the Dockerfile) was replaced by a new version that is not backward compatible, or because the wrong "latest" version was retrieved from the build cache. The "latest" tag should also be avoided when deploying containers in production, because then the version of the image that is actually running cannot be tracked.
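A sketch of pinning the parent image (the tag value is illustrative): the build then no longer depends on whatever "latest" happens to point to:

```dockerfile
# fragile: the parent can change between builds
# FROM centos

# reproducible: pin an explicit version tag
FROM centos:7.6.1810
```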
7. Never run more than one process in a single container
The container works best when it runs a single process (HTTP daemon, application server, database). When it runs more than one, you will have a lot of trouble managing and retrieving logs and updating each process separately.
8. Do not store credentials in the image; use environment variables
Do not hard-code any username/password in the image. Use environment variables to retrieve the information from outside the container. The Postgres image is an excellent illustration of this principle.
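A sketch of entrypoint logic that takes credentials from the environment rather than from values baked into the image. The variable names (DB_USER) are illustrative, not from any real image:

```shell
# simulates `docker run -e DB_USER=readonly image`; in a real container
# the variable would be injected by the runtime, not set in the script
export DB_USER=readonly

start_app() {
  # fall back to a default user only when nothing was injected
  echo "connecting as ${DB_USER:-app}"
}

msg=$(start_app)
echo "$msg"
```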
9. Do not run the process as root
“By default, Docker containers run as root. As Docker’s technology matures, the number of security default options available is increasing. Currently, requiring root is dangerous for other users, and not all environments can use root. The image should use the USER directive to specify a non-root USER for the container to run.” (From Guidance for Docker Image Authors)
10. Don’t rely on IP addresses
Each container has its own internal IP address, which may change if you start and then stop the container. If your application or microservice needs to communicate with another container, use environment variables to pass the appropriate hostname and port between containers.
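A sketch of the same idea in shell (the variable names PEER_HOST/PEER_PORT are illustrative): the container resolves its peer from the environment, never from a hard-coded IP:

```shell
# in real use these would be injected: docker run -e PEER_HOST=db -e PEER_PORT=5432 app
PEER_HOST=db
PEER_PORT=5432

# compose the target from environment values rather than a fixed address
target="${PEER_HOST}:${PEER_PORT}"
echo "peer service at $target"
```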
11. Monitor container Docker
Monitoring has become more and more important for developers. For real-time monitoring of Docker, Cloudinsight is recommended here. Unlike some monitoring methods that require self-scripting, Cloudinsight is a free SaaS service that provides one-click Docker monitoring with a great visual interface. In addition, Cloudinsight supports monitoring of multiple operating systems, databases, and more. It can display the performance data of all the underlying components of the monitored system in one piece.
Original text: my.oschina.net/cllgeek/blo…
Jenkins and Docker’s automated CI/CD combat
I. Release process design
Workflow:
- Developers commit code to the Git repository;
- Jenkins manually/periodically triggers project builds;
- Jenkins pulls the code, encodes the code, packages the image, and pushes it to the image repository;
- Jenkins created the container on the Docker host and released it.
III. Deployment process
1, Deploy Git
If your company has an internal Git server, you can clone from it directly:
git clone [email protected]:/home/git/solo.git
2. Deploy the Jenkins environment
Deployment Portal: Jenkins+Maven+Svn implements automatic code packaging and distribution
3. Deploy a private image repository
Note: because the Docker registry uses HTTPS authentication by default, every client that needs to pull from it must modify its configuration file.
[root@linux-node1 ~]# vim /etc/sysconfig/docker
# Modify these options if you want to change the way the docker daemon runs
OPTIONS='--selinux-enabled --insecure-registry 192.168.56.11:5000'
4. Install Docker on all hosts
1) Install dependencies
yum install -y yum-utils device-mapper-persistent-data lvm2
2) Add the Docker CE repository
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
3) Install Docker CE
yum install docker-ce -y
4) Configure an accelerator
curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://bc437cce.m.daocloud.io
#The default source fetches data from abroad, so it is slow and can time out; the accelerator points to a domestic mirror (https://www.daocloud.io/)
5) Start Docker and enable it on boot
# systemctl start docker
# systemctl enable docker
IV. Build the base image
Apache, Nginx, Tomcat, LNMP, LAMP, LNTP
Java programs must have a JDK environment to run. To reduce the image size and improve performance, the JDK is placed directly on the host and mounted into the container.
1. Install JDK
#Rz uploads the tar package, decompresses it, and puts it in the specified directory
rz.......
tar -zxvf jdk-8u60-linux-x64.tar.gz
mv jdk1.8.0_60 /usr/local/jdk1.8
2, Write the Dockerfile
# cat Dockerfile
FROM centos:7
#Base (parent) image
MAINTAINER www.aliangedu.com
#Image maintainer
ENV VERSION 8.5.33
#Tomcat version
ENV JAVA_HOME /usr/local/jdk
#JDK absolute path
RUN yum install wget -y
#Run command
RUN wget http://mirrors.shu.edu.cn/apache/tomcat/tomcat-8/v${VERSION}/bin/apache-tomcat-${VERSION}.tar.gz && \
    tar zxf apache-tomcat-${VERSION}.tar.gz && \
    mv apache-tomcat-${VERSION} /usr/local/tomcat && \
    rm -rf apache-tomcat-${VERSION}.tar.gz /usr/local/tomcat/webapps/* && \
    mkdir /usr/local/tomcat/webapps/ROOT
EXPOSE 8080
#The port used by the program
CMD /usr/local/tomcat/bin/catalina.sh run
#At run time, -v mounts the host JDK directory into the container's /usr/local. For debugging, the EXPOSE and CMD lines can be temporarily removed and the image rebuilt; then use -p to specify the port, enter the container, and start Tomcat manually.
3. Create a mirror
docker build -t 192.168.56.11:5000/tomcat-85:latest -f Dockerfile .
#The trailing "." is the build context (the current path); its contents are sent to the daemon when the image is created
4. Upload to the Docker image repository
[root@node02 scripts]# docker push 192.168.56.11:5000/tomcat-85:latest
5. Start the mirror test
[root@node02 scripts]# docker run -itd -p 8080:8080 -v /usr/local/jdk1.8:/usr/local/jdk 192.168.56.11:5000/tomcat-8:latest
[root@3addff07c464 root]# echo "123" > index.jsp
V. Jenkins configuration
1. On the home page, choose System Management > Global Tool Configuration
Keep Git by default:
2. Jenkins install the necessary plugins
Home page -> System Administration -> Manage Plug-ins:
Install the SSH and Git Parameter plug-ins.
Plug-in description:
- SSH: Used to SSH a remote Docker host to execute Shell commands
- Git Parameter: Dynamically obtain Branch and Tag of Git repository
3. Configure the SSH plug-in
Step 1: Create a credential to connect to the Docker host (a user with permission)
Home page -> Credentials -> System -> Right click global Credentials -> Add Credentials:
Enter the username and password to connect to the Docker host:
Step 2: Add an SSH remote host
Home page -> System Management -> System Settings -> SSH remote hosts:
Problem: When using Docker Images as a normal user, the following error occurs:
VI. Upload Java projects downloaded from GitHub to your own GitLab repository
# git clone https://github.com/b3log/solo
# cd solo
#Remove the old push address and add a new one:
# git remote remove origin
# git remote add origin [email protected]:qqq/solo.git
#Commit the code to the Git repository and create a tag:
# touch src/main/webapp/a.html
# git add .
# git commit -m "a"
#Create a label:
# git tag 1.0.0
#Push to the Git server:
# git push origin 1.0.0
Check out the SOLO project at Gitlab:
VII. Create a project in Jenkins and test the release
1. On the home page -> New Task -> Enter the task name to build a Maven project:
Note: If the “Build a Maven project” option is not displayed, you need to install the “Maven Integration Plugin” in the admin plugin.
Configure Git parameterized builds:
2. Dynamically obtain the Git repository's tags, letting the user interactively select which tag to release:
3. Specify the Git repository address of the project:
Change */master to $Tag. Tag is the variable name dynamically obtained above, indicating that the code version is selected according to the user.
4. Set maven build command options:
clean package -Dmaven.test.skip=true
Build the project with the pom.xml file.
Jenkins builds the image locally and pushes it to the image repository, then connects to the Docker host over SSH to create a container from the pushed image:
In the preceding figure, the command is as follows:
REPOSITORY=192.168.56.11:5000/solo:${Tag}
#Build the image
cat > Dockerfile << EOF
FROM 192.168.56.11:5000/tomcat-8:latest
RUN rm -rf /usr/local/tomcat/webapps/ROOT
COPY target/*.war /usr/local/tomcat/webapps/ROOT.war
CMD ["/usr/local/tomcat/bin/catalina.sh", "run"]
EOF
docker build -t $REPOSITORY .
#Upload the image
docker push $REPOSITORY
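The tag substitution in the build step can be sketched and checked locally without Docker. The registry address and image names follow the article's example; the Tag value is normally injected by the Jenkins Git Parameter plugin:

```shell
# stand-in for the Jenkins-provided build parameter
Tag="1.0.0"
REPOSITORY="192.168.56.11:5000/solo:${Tag}"

# an unquoted heredoc delimiter (EOF) expands variables, so the generated
# Dockerfile could be templated the same way as $REPOSITORY
cat > Dockerfile.demo <<EOF
FROM 192.168.56.11:5000/tomcat-8:latest
COPY target/*.war /usr/local/tomcat/webapps/ROOT.war
EOF

echo "would run: docker build -t ${REPOSITORY} . && docker push ${REPOSITORY}"
rm -f Dockerfile.demo
```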
The Command in the figure above is as follows:
REPOSITORY=192.168.56.11:5000/solo:${Tag}
#Deploy
sudo docker rm -f blog-solo || true
sudo docker image rm $REPOSITORY || true
sudo docker container run -d --name blog-solo -v /usr/local/jdk1.8:/usr/local/jdk -p 8080:8080 $REPOSITORY
#-d runs in the background, -v mounts the JDK directory, -p maps the port; the image name comes last
Note: the blog-solo container exposes host port 8080, so the project is accessed via the host IP at 192.168.56.12:8080.
The blog-solo project is now configured. Start the build:
Select the tag and start building:
In Build History at the lower left, open the first build and view its Console Output:
Build details:
Build successful:
Visit 192.168.56.12:8080 to view the deployment result.
Adjust the project access address
Enter the container and switch to the project directory:
vi WEB-INF/classes/latke.properties
#### Server ####
# Browser visit protocol
serverScheme=http
# Browser visit domain name
serverHost=192.168.56.12
# Browser visit port, 80 as usual, THIS IS NOT SERVER LISTEN PORT!
serverPort=8080
After the adjustment, restart Tomcat and verify again; the result is as follows:
Now that the automated CI environment is set up, you can simulate the automated release process by submitting code and tagging it.
VIII. Problem summary
Check the docker.sock permission
[root@node03 ~]# ll /var/run/docker.sock
srw-rw---- 1 root docker 0 Sep  4 21:55 /var/run/docker.sock
Solution: allow docker commands (such as docker images) to be run without sudo
[root@node03 ~]# sudo groupadd docker
## groupadd: group "docker" already exists (safe to ignore)
[root@node03 ~]# sudo gpasswd -a jenkins docker
## Add the "jenkins" user to the "docker" group
[root@node03 ~]# sudo service docker restart
## Restart the Docker service
[root@node03 ~]# newgrp - docker
## Reload the group information. Be sure to run this command, otherwise the new group membership is not picked up because of caching
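To verify the fix, group membership can be checked directly; a small sketch (the user name jenkins is the example from above, and the helper function name is made up):

```shell
# Report whether a user belongs to the docker group, i.e. whether it can
# use /var/run/docker.sock without sudo.
check_docker_group() {
  if id -nG "$1" 2>/dev/null | grep -qw docker; then
    echo "in-group"
  else
    echo "not-in-group"
  fi
}
check_docker_group jenkins
```

If the answer is "not-in-group", re-run the gpasswd and newgrp steps above.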
Original: www.toutiao.com/a6602838654…
Docker common and difficult problems solution
The main purpose here is to record the problems encountered when using Docker and their solutions.
1.Docker migrates the storage directory
By default, Docker stores containers and images under /var/lib/docker.
Cause of the problem: the monitoring system showed that one of the company's servers was about to run out of disk space. A quick check found that the /var/lib/docker directory was very large. As explained above, /var/lib/docker holds container-related storage, so it cannot simply be deleted.
The plan is to migrate the Docker storage directory, or to expand the /var device, which achieves the same goal. For details on dockerd's parameters, see the official documentation.
Note, however, that you should avoid soft links if possible, because some container orchestration systems do not support them, for example the familiar Kubernetes (k8s).
# The container cannot be started
ERROR: cannot create temporary directory!
# Check the system storage usage
$du -h --max-depth=1
Solution 1: Add a soft link
#1. Stop docker services
$sudo systemctl stop docker
#2. Start the directory migration
$sudo mv /var/lib/docker /data/
#3. Add a soft link
$sudo ln -s /data/docker /var/lib/docker
#4. Start the docker service
$sudo systemctl start docker
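The move-then-link trick itself can be rehearsed on throwaway directories before touching the real /var/lib/docker (all paths below are scratch paths created only for this demo):

```shell
set -e
root=$(mktemp -d)
# Stand-ins for /var/lib/docker and /data/docker
mkdir -p "$root/var-lib-docker"
echo img > "$root/var-lib-docker/layer"
# 1. Move the data, 2. link the old path to the new location
mv "$root/var-lib-docker" "$root/data-docker"
ln -s "$root/data-docker" "$root/var-lib-docker"
cat "$root/var-lib-docker/layer"   # the old path still resolves: prints "img"
```

With the real directories, the same two commands run between stopping and starting the docker service, as shown above.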
Solution 2: Modify the Docker configuration file
#1. Modify the Docker systemd service file
$sudo vim /lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd --graph=/data/docker/
#2. Alternatively, modify the Docker daemon configuration file
$sudo vim /etc/docker/daemon.json
{
"live-restore": true,
"graph": [ "/data/docker/" ]
}
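A malformed /etc/docker/daemon.json prevents dockerd from starting at all, so it is worth syntax-checking the JSON before restarting the service. A quick sketch using a scratch file (python3 is assumed to be available; note also that on newer Docker releases the `graph` key is deprecated in favour of `data-root`):

```shell
# Write the config to a scratch file and validate that it parses as JSON.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{
  "live-restore": true,
  "graph": ["/data/docker/"]
}
EOF
python3 -m json.tool "$cfg" > /dev/null && echo "daemon.json syntax OK"
```

Only after the check passes should the real file be edited and the service restarted.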
Precautions: pay attention to the commands used when migrating the Docker directory. Either move the files directly with the mv command, or copy them with the cp command, but in the latter case make sure the file permissions and attributes are copied along with the data, otherwise permission problems may appear at runtime. If the processes in the container run as root this particular problem does not arise, but the directory should still be migrated with the correct procedure.
#Using the mv command
$sudo mv /var/lib/docker /data/docker
#Using the cp command
$sudo cp -arv /data/docker /data2/docker
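The difference between a plain cp and cp -a is easy to demonstrate on scratch files: archive mode keeps the mode and timestamps, while a plain copy resets the modification time (the paths below are temporary, created only for the demo; `stat -c` is the GNU coreutils form):

```shell
set -e
work=$(mktemp -d)
mkdir "$work/src"
echo data > "$work/src/file"
chmod 640 "$work/src/file"
touch -d '2020-01-01 00:00:00' "$work/src/file"
cp -a "$work/src" "$work/dst"        # archive copy: keeps mode and mtime
cp "$work/src/file" "$work/plain"    # plain copy: mtime becomes "now"
stat -c '%a' "$work/dst/file"        # prints 640
```

This is why the migration above uses cp -arv rather than a bare cp.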
A concrete case: a container was started whose process ran as a non-root user out of the /tmp directory, and it failed with a permission error. When a container image is imported, every directory the container needs at startup is restored with its permissions and attributes; if the files are instead copied with a plain cp command, those attributes become inconsistent, which can also introduce security issues.
This article has reached its length limit; more troubleshooting is covered in the follow-up post on solutions to Docker's common problems.