Installation
-
Install using the method described on Aliyun (recommended)
-
Log in to Aliyun, click Console in the upper right corner, and then click: 1. the icon in the upper left corner ----> 2. Products and Services ----> 3. Container Image Service
- Then: 1. click Image Accelerator ----> 2. select the system type ----> 3. follow the Docker-CE link ----> 4. configure the image accelerator after installation
-
Install using the method described in the official Docker documentation
-
Open docs.docker.com, select the system type, and follow the steps to install
Configure the image accelerator
Refer to the previous section "Install using the method described on Aliyun"
Docker commands on CentOS 7
-
Start/restart/stop/view status: systemctl start/restart/stop/status docker
-
Reload the configuration file: systemctl reload docker, then restart (configuration file: /etc/docker/daemon.json)
-
Enable/disable automatic start on boot: systemctl enable/disable docker
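-
A minimal sketch of /etc/docker/daemon.json for the image accelerator mentioned above (the mirror URL is a placeholder; use the address Aliyun assigns to your account):
{
  "registry-mirrors": ["https://<your-id>.mirror.aliyuncs.com"]
}
After editing, reload and restart: systemctl daemon-reload && systemctl restart docker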
Docker command format
-
docker [OPTIONS] COMMAND
Help commands
-
docker [COMMAND] --help: displays help for a command
-
docker version: displays the docker version
-
docker info: displays docker system information, including the number of images and containers
Images
-
UnionFS: a layered, lightweight, high-performance file system in which changes to the file system are layered one commit at a time, and several directories can be united into a single virtual file system. The union file system is the basis of Docker images. Images can inherit by layer, so specific application images can be built on a base image (an image with no parent). Feature: multiple file systems are loaded simultaneously, but only one file system is visible from the outside; union mounting adds all the layers together, so the final file system contains all the underlying files and directories.
-
Docker image loading principle: a Docker image is actually composed of layered file systems (UnionFS), mainly consisting of two parts:
-
bootfs (boot file system) mainly consists of the bootloader and the kernel; the bootloader boots and loads the kernel. When Linux starts, the bootfs file system is loaded; bootfs sits at the bottom of a Docker image. This layer is the same as in a typical Linux/Unix system, containing the boot loader and the kernel. After boot completes, the entire kernel is in memory; control of memory is transferred from bootfs to the kernel, and the system then unmounts bootfs.
-
The root file system (rootfs) sits above bootfs. It contains the standard directories and files of a typical Linux system, such as /dev, /proc, /bin, and /etc. Different rootfs correspond to different operating system distributions, such as CentOS and Ubuntu. The CentOS image in Docker can be only about 200 MB because its rootfs is slimmed down to contain only the basics (bootfs is universal: CentOS, Ubuntu, etc. use the same bootfs).
-
Layered images: when you pull an image, you will find that more than one layer is downloaded. For example, a Tomcat image is nearly 500 MB: kernel -> centos -> jdk8 -> tomcat (not exhaustive), which is why it is so large.
-
Why are Docker images layered? The biggest benefit is resource sharing. For example, if multiple images are built from the same base image, the host only needs to keep one copy of the base image on disk and load it once into memory to serve all containers, and every layer of an image can be shared.
-
Feature: Docker images are read-only. When a container starts, a new writable layer is loaded on top of the image; this is usually called the container layer, and everything below it is called the image layer.
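-
A quick, hedged way to see these layers for yourself (the tomcat image is just an example):
docker pull tomcat
docker history tomcat                                     # one row per image layer
docker inspect --format '{{json .RootFS.Layers}}' tomcat  # the layer digests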
Image commands
An image is a lightweight, standalone, executable software package that bundles a software runtime environment and the software developed on top of it. It contains everything needed to run the software: code, runtime, libraries, environment variables, and configuration files.
-
docker images [OPTIONS]: lists local images
-
-a: lists all local images (including intermediate image layers)
-
-q: displays only image IDs
-
--digests: displays image digests
-
--no-trunc: displays the full image description
-
docker search [OPTIONS] image name: searches for images on Docker Hub (https://hub.docker.com)
-
--no-trunc: displays the full image description
-
-s N: lists only images with at least N stars
-
--automated: lists only automated-build images
-
docker pull image name[:TAG]: downloads the image to the local machine. If no tag is given, latest is used by default
-
docker rmi [OPTIONS] image name[:TAG]/image ID: deletes images; to delete several, separate them with spaces
-
-f: forcibly deletes the image, regardless of whether a container is using it
-
docker rmi -f $(docker images -qa): deletes all images; backquotes (`...`) can be used instead of $()
-
docker commit -m="commit description" -a="author" container ID target image name[:tag]: commits a container copy as a new image
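-
A hedged sketch of the commit workflow (container and image names are illustrative):
docker run -it --name mytomcat tomcat /bin/bash   # start a container and modify it
# ...change files inside the container, then detach with Ctrl+P+Q...
docker commit -m="added test page" -a="zzyy" mytomcat zzyy/mytomcat:1.2
docker images                                     # the new image now appears locally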
Container commands
Docker uses containers to independently run one application or a group of applications. A container is a running instance created from an image; it can be created, started, stopped, and deleted. Each container is an isolated, secure platform. You can think of a container as a simplified Linux environment (with root permissions, process space, user space, network space, and so on) plus the applications running in it. A container is almost exactly the same as an image, a unified view of a stack of layers; the only difference is that the top layer of a container is readable and writable.
-
docker run [OPTIONS] image [COMMAND] [ARG…] : Runs the container based on the image
-
--name="new container name": assigns a name to the container
-
-d: runs the container in the background and prints the container ID, i.e. starts a daemon container
-
docker ps -a will show that such a container has exited
-
For a Docker container to keep running in the background, it must have a foreground process
-
If the command the container runs is not one that stays in the foreground (unlike top or tail), the container exits automatically because it decides there is nothing left to do; this is how Docker works
-
docker run -d centos /bin/sh -c "while true; do echo hello zzyy; sleep 2; done": this makes the centos container print hello zzyy every 2 seconds, so container instances running this command will not exit automatically
-
-i: runs the container in interactive mode, i.e. starts an interactive container; usually used together with -t
-
-t: allocates a pseudo terminal (similar to a CentOS terminal) to the container; usually used together with -i
-
-P: random port mapping
-
-p: specifies port mapping in one of the following formats (for Tomcat, containerPort defaults to 8080; hostPort is the externally accessed port); see the examples after this list
-
ip:hostPort:containerPort
-
ip::containerPort
-
hostPort:containerPort
-
containerPort: if hostPort is not specified, it is randomly assigned
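-
Hedged examples of the four formats, assuming the tomcat image:
docker run -d -p 127.0.0.1:9080:8080 tomcat   # ip:hostPort:containerPort
docker run -d -p 127.0.0.1::8080 tomcat       # ip::containerPort, random host port
docker run -d -p 9080:8080 tomcat             # hostPort:containerPort
docker run -d -p 8080 tomcat                  # containerPort only, random host port
docker ps                                     # shows the resulting mappings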
-
docker create: creates a new container (same usage as run) but does not start it
-
docker ps [OPTIONS]: lists running container instances
-
-a: lists all containers, both currently running and previously run
-
-l: displays the most recently created container
-
-n: displays the n most recently created containers
-
-q: silent mode, displays only container IDs
-
--no-trunc: does not truncate the output
-
There are two ways to exit a running container:
-
exit: exits and stops the container
-
Ctrl+P+Q: exits without stopping the container; use docker attach to re-enter
-
docker start container ID/container name: starts the container
-
docker restart container ID/container name: restarts the container
-
docker stop container ID/container name: stops the container
-
docker kill container ID/container name: forcibly stops the container
-
docker pause container ID: pauses the container
-
docker unpause container ID: unpauses the container
-
docker rename: renames a container
-
docker rm [OPTIONS] container ID/container name: deletes stopped containers
-
-f: forcibly deletes the container, whether or not it is running
-
docker rm -f $(docker ps -qa): deletes all containers; backquotes can be used instead of $()
-
docker ps -qa | xargs docker rm: deletes all containers
-
docker logs -f -t --tail number container ID: views container logs
-
-t: adds timestamps
-
-f: follows the latest log output
-
--tail number: displays the last number lines
-
docker top container ID: views the processes running inside the container
-
docker inspect container ID: views the container's details as a JSON string
-
Entering a running container and interacting with it on the command line:
-
docker attach container ID: re-enters the terminal of the container's start command directly; no new process is started
-
docker exec -t container ID bashShell: runs bashShell in the container and returns the result without attaching a terminal; it opens a new terminal in the container and can start a new process (bashShell is the command to run)
-
docker exec -it container ID /bin/bash: has the same effect as attach, but opens a new terminal; exit does not stop the container, which keeps running
-
docker cp container ID:path inside container destination host path: copies files from the container to the host
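-
A hedged example of docker cp (container name and paths are illustrative):
docker cp myt9:/usr/local/cincontainer.txt /root/   # container -> host
docker cp /root/a.txt myt9:/tmp/                    # host -> container also works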
Other commands
-
diff: views changes to a container's file system
-
events: gets real-time events from the Docker server
-
export: exports the container's file system as a tar archive (counterpart of import)
-
import: creates a new file system image from the contents of a tar archive (counterpart of export)
-
docker load -i target.tar: loads an image from a tar archive (counterpart of save); see the sketch after this list
-
docker save -o source.tar image name: saves an image to a tar archive (counterpart of load)
-
login: registers with or logs in to a Docker registry server
-
logout: logs out of the current Docker registry
-
port: views the container's internal port corresponding to a mapped port
-
wait: blocks until the container stops, then prints its exit status code
-
stats: displays container resource usage in real time
-
update: updates a container's configuration
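-
A hedged sketch of the two archive round-trips above (names are illustrative):
docker save -o tomcat9.tar tomcat9         # image -> tar, keeps layers and tags
docker load -i tomcat9.tar                 # tar -> image
docker export myt9 > myt9.tar              # container file system -> tar
docker import myt9.tar mytomcat:imported   # tar -> new flattened image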
Docker container data volume
-
The application and its runtime environment are packaged to run in a container, but we want data to be persistent, and containers should be able to share data. If the data generated by a Docker container is not saved into an image via docker commit, it naturally disappears when the container is deleted. To persist data, use volumes in Docker.
-
A volume is a directory or file that exists in one or more containers and is mounted into the container by Docker. It is not part of the union file system, so it bypasses the Union File System and thereby provides features for continuously storing or sharing data. Volumes are designed to persist data and are completely independent of the container's life cycle, so Docker does not delete mounted volumes when a container is deleted.
-
Features: data volumes can share or reuse data between containers; changes in a volume take effect directly; changes in a data volume are not included when the image is updated; a data volume's life cycle lasts until no container uses it.
-
docker run -v host absolute path:container absolute path[:ro] image: binds a host directory to a container directory
-
If the directory does not exist, it is created automatically.
-
Adding :ro (read-only) means the container can only read the volume and cannot add, delete, or modify its contents;
-
This method binds one host directory per -v (to bind several, repeat -v ...); a DockerFile can also be used to bind multiple volumes.
-
If you get "cannot open directory .: Permission denied", add --privileged=true before the image name.
-
A DockerFile is the image template description file and has its own syntax rules. Create a new file and edit the content:
FROM centos
VOLUME ["/volumeData1","/volumeData2","/volumeData3"]
CMD echo "success"
CMD /bin/bash
-
Then use docker build -f file absolute path -t image name[:TAG] . to generate the image, then run a container with docker run -it image name. You can observe the three container volume directories above.
-
Using docker inspect container ID, you can see that Docker automatically binds host directories for the volumes.
-
docker build -f file absolute path -t image name[:TAG] .: builds the DockerFile into an image. The trailing period is required and stands for the current path. After binding container volumes with the DockerFile VOLUME instruction, use docker inspect container ID to view the corresponding host directories.
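-
A hedged end-to-end sketch of this flow (paths and names are illustrative):
docker build -f /mydocker/DockerFile -t zzyy/mycentos .
docker run -it --name volumetest zzyy/mycentos
# from another shell on the host:
docker inspect volumetest   # check the "Mounts" section for the auto-bound host paths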
-
Data volume container: a named container with data volumes attached, from which other containers share data. Such a container is called a data volume container. Example of sharing volumes between containers with --volumes-from:
-
Start a parent container: docker run -it --name dcf mycentos. The container contains the three data volume directories; create files in one of them.
-
docker run -it --name dcs01 --volumes-from dcf mycentos
-
docker run -it --name dcs02 --volumes-from dcf mycentos
-
Re-checking the directory in dcf, you will find three files, indicating that data is shared from parent to child, child to parent, and child to child.
-
After stopping dcf, check the data volume directory in dcs01: the file is still there; create a new file. Then check the data volume directory in dcs02: both the old file and the new file created in dcs01 are there, indicating that the shared data outlives individual containers. In fact, running docker inspect on the three containers shows that they are all bound to the same host directory.
DockerFile
-
A DockerFile is a build file used to construct a Docker image; it is a script composed of a series of instructions and parameters
-
DockerFile ----> docker build ----> docker run
-
The Docker base image scratch is similar to Object in Java
-
Build process analysis:
-
DockerFile content basics
-
Each reserved word instruction must be uppercase and followed by at least one argument
-
Instructions are executed from top to bottom
-
# comment
-
Each instruction creates a new image layer and commits the image
-
The general process by which Docker executes a DockerFile
-
Docker runs a container from the base image (scratch, the most basic image, is similar to Object in Java)
-
It executes one instruction and modifies the container
-
It commits a new image layer, performing something similar to docker commit
-
Docker then runs a new container based on the image just committed
-
It executes the next instruction in the DockerFile (i.e. repeats steps 2-4) until all instructions are executed
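-
A hedged toy build to watch one layer being committed per instruction (file written via a heredoc; names are illustrative):
cat > DockerFile <<'EOF'
FROM centos
RUN echo "layer one" > /tmp/1.txt
RUN echo "layer two" > /tmp/2.txt
CMD /bin/bash
EOF
docker build -f DockerFile -t layerdemo .
docker history layerdemo   # one row per instruction/layer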
-
Conclusion
-
From the perspective of application software, a DockerFile, a Docker image, and a Docker container represent three different stages of the software
-
The DockerFile is the raw material of the software. It defines everything a process needs: executable code or files, environment variables, dependency packages, the runtime environment, dynamic link libraries, the operating system distribution, service processes and kernel processes (when an application process needs to interact with system services and kernel processes, one must consider how to design namespace permission control), and so on
-
The Docker image is the deliverable of the software
-
The Docker container can be regarded as the running state of the software, directly providing services
-
The DockerFile is development-oriented, the Docker image is the delivery standard, and the Docker container covers deployment and operations. The three are indispensable and together act as the cornerstone of the Docker system
-
DockerFile structure (reserved-word instructions)
-
FROM: the base image; which image the new image is based on
-
MAINTAINER: Specifies the name and email address of the image MAINTAINER
-
RUN: a command to run while the image is being built
-
EXPOSE: Indicates the port that the current container exposes to the public
-
WORKDIR: specifies the default working directory that the terminal logs in to after the container is created
-
ENV: Used to set environment variables during image building
-
ENV MY_PATH /usr/mytest: this environment variable can be used in any subsequent RUN instruction, just as if the variable prefix were specified before the command
-
You can also use these environment variables directly in other directives
-
Example: WORKDIR $MY_PATH
-
ADD: copies files from the host directory into the image; ADD automatically handles URLs and decompresses tar archives
-
COPY: copies files and directories to an image
-
COPY src dest
-
COPY ["src","dest"]
-
VOLUME: container data VOLUME used for data storage and persistence
-
CMD: specifies the command to run when a container is started. DockerFile can have multiple CMD commands, but only the last one takes effect, and CMD is replaced by the argument after docker run
-
Shell format: CMD <command>
-
Exec format: CMD ["executable","param1","param2"...]
-
Parameter list format: CMD ["param1","param2"...]; after specifying the ENTRYPOINT instruction, use CMD to supply the concrete parameters
-
ENTRYPOINT: Specifies the command to run when a container is started. The purpose of ENTRYPOINT is the same as that of CMD
-
ONBUILD: the parent image's ONBUILD instruction is triggered when the image is inherited by a child image (at build time, after the child's FROM runs)
-
Custom CentOS
-
The default working directory of the centos image is /; vim is not available by default; ifconfig is not available by default
-
Customize MyCentOS so that it has a default login path, the vim editor, and ifconfig to view the network configuration
FROM centos
MAINTAINER [email protected]
ENV mypath /usr/local
WORKDIR $mypath
EXPOSE 80
RUN yum -y install vim
RUN yum -y install net-tools
CMD /bin/bash
-
Run pwd, vim, and ifconfig to verify that the customization succeeded
-
docker history image ID: views the build history of the image
-
CMD is different from ENTRYPOINT
-
With CMD, the arguments after docker run replace the CMD entirely; for example, docker run tomcat ls -l replaces CMD ["catalina.sh", "run"], so catalina.sh is not executed and Tomcat does not start
-
ENTRYPOINT is different: arguments after docker run are passed to ENTRYPOINT (appended), forming a new command combination
-
When both CMD and ENTRYPOINT exist, the CMD instruction becomes the arguments of ENTRYPOINT, and the arguments supplied by CMD are overridden by arguments following docker run
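-
A hedged sketch of the CMD-as-arguments behavior (file written via a heredoc; names are illustrative):
cat > DockerFile <<'EOF'
FROM centos
ENTRYPOINT ["ls"]
CMD ["-l"]
EOF
docker build -f DockerFile -t epdemo .
docker run epdemo      # runs: ls -l   (CMD supplies the default arguments)
docker run epdemo -a   # runs: ls -a   (docker run arguments override CMD)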
-
Custom tomcat
-
mkdir -p /zzyyuse/mydockerfile/tomcat9
-
touch c.txt in the above directory
-
Copy the JDK and Tomcat installation packages to the previous directory
-
cp /root/apache-tomcat-9.0.29.tar.gz .
-
cp /root/jdk-8u144-linux-x64.tar.gz .
-
Create a new DockerFile in the /zzyyuse/mydockerfile/tomcat9 directory with the following content:
FROM centos
MAINTAINER [email protected]
# Copy c.txt from the current directory into the container's /usr/local directory
COPY c.txt /usr/local/cincontainer.txt
# Unpack the JDK and Tomcat archives into /usr/local/
ADD jdk-8u144-linux-x64.tar.gz /usr/local/
ADD apache-tomcat-9.0.29.tar.gz /usr/local/
# Install vim
RUN yum -y install vim
# Set environment variables
ENV JAVA_HOME /usr/local/jdk1.8.0_144
ENV CLASSPATH $JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
ENV CATALINA_HOME /usr/local/apache-tomcat-9.0.29
ENV CATALINA_BASE /usr/local/apache-tomcat-9.0.29
ENV PATH $PATH:$JAVA_HOME/bin:$CATALINA_HOME/lib:$CATALINA_HOME/bin
# Listening port
EXPOSE 8080
# Start Tomcat when the container runs (only the last CMD takes effect)
# ENTRYPOINT ["/usr/local/apache-tomcat-9.0.29/bin/startup.sh"]
# CMD ["/usr/local/apache-tomcat-9.0.29/bin/catalina.sh", "run"]
CMD /usr/local/apache-tomcat-9.0.29/bin/startup.sh && tail -F /usr/local/apache-tomcat-9.0.29/logs/catalina.out
-
Build: docker build -f DockerFile -t tomcat9 .
-
Run it, mounting two data volumes to view Tomcat test pages and logs:
docker run -d -p 9080:8080 --name myt9 -v /zzyyuse/mydockerfile/tomcat9/test:/usr/local/apache-tomcat-9.0.29/webapps/test -v /zzyyuse/mydockerfile/tomcat9/tomcat9logs/:/usr/local/apache-tomcat-9.0.29/logs --privileged=true tomcat9
-
Check and verify:
-
docker exec -t myt9 ls -l /usr/local/apache-tomcat-9.0.29/logs
-
docker exec -t myt9 pwd
-
docker exec -t myt9 ls -l
-
Use Firefox to access 127.0.0.1:9080 and check whether Tomcat is started successfully
-
You can write a JSP test in the Test folder
Installing common software
-
The overall steps
-
Search for the image
-
Pull the image
-
View the image
-
Start the image
-
Stop the container
-
Remove the container
-
Install tomcat
-
Search for tomcat on Docker Hub ---- docker search tomcat
-
Pull the tomcat image from Docker Hub to the local machine ---- docker pull tomcat
-
Check whether tomcat was pulled ---- docker images
-
Run the image ---- docker run -it -p 9080:8080 tomcat
-
MySQL installation
-
Search ---- docker search mysql
-
Pull the mysql image tagged 5.6 from Docker Hub ---- docker pull mysql:5.6
-
Create a container from the mysql:5.6 image ---- docker run -p 12345:3306 --name mysql -v /zzyyuse/mysql/conf:/etc/mysql/conf.d -v /zzyyuse/mysql/logs:/logs -v /zzyyuse/mysql/data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=123456 -d mysql:5.6
-
docker run -p 12345:3306: port mapping
-
--name mysql: names the container
-
-v /zzyyuse/mysql/conf:/etc/mysql/conf.d: mysql configuration
-
-v /zzyyuse/mysql/logs:/logs: mysql logs
-
-v /zzyyuse/mysql/data:/var/lib/mysql: mysql data
-
-e MYSQL_ROOT_PASSWORD=123456: root user password
-
-d mysql:5.6: run in the background
-
Check that the container is running ---- docker ps
-
Enter the container ---- docker exec -it container ID /bin/bash
-
Log in to mysql as user root ---- mysql -uroot -p, press Enter, and type the password 123456
-
Build databases, build tables and so on
-
create database db01;
-
use db01;
-
create table t_boot(id int not null primary key,bookname varchar(20));
-
show tables;
-
Back up the data ---- docker exec mysql container ID sh -c 'exec mysqldump --all-databases -uroot -p"123456"' > /zzyyuse/all-database.sql
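-
A hedged counterpart for restoring the dump (the container reference is a placeholder):
docker exec -i <mysql container ID> sh -c 'exec mysql -uroot -p"123456"' < /zzyyuse/all-database.sql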
-
Install Redis
-
Pull the redis image tagged 3.2 from Docker Hub (via the Aliyun accelerator) ---- docker pull redis:3.2
-
Create a container ---- docker run -p 6379:6379 -v /zzyyuse/myredis/data:/data -v /zzyyuse/myredis/conf/redis.conf:/usr/local/etc/redis/redis.conf -d redis:3.2 redis-server /usr/local/etc/redis/redis.conf --appendonly yes
-
docker run -p 6379:6379: port mapping
-
-v /zzyyuse/myredis/data:/data: redis data
-
-v /zzyyuse/myredis/conf/redis.conf:/usr/local/etc/redis/redis.conf: redis configuration (note that this host path is a directory here, not a file)
-
-d redis:3.2: run in the background
-
redis-server /usr/local/etc/redis/redis.conf --appendonly yes: the CMD command (--appendonly yes enables AOF data persistence)
-
On the host, create a redis.conf file in the /zzyyuse/myredis/conf/redis.conf directory ---- vim /zzyyuse/myredis/conf/redis.conf/redis.conf, and copy the contents of the redis configuration file into it (comment out the bind directive, since there is no fixed address to bind)
-
Connect to redis ---- docker exec -it redis container ID redis-cli
-
Add data: set k1 v1
-
Test that the persistence file is generated ---- cat /zzyyuse/myredis/data/appendonly.aof
Publishing a local image to Aliyun
-
Image generation method
-
Using DockerFile
-
Create a new image from a container ---- docker commit [OPTIONS] container ID REPOSITORY[:TAG]
-
Push the local image to Ali Cloud
-
Prepare the local image material prototype
-
On the Aliyun developer platform, log in and select Container Image Service
-
Create a namespace
-
Create an image repository
-
Click the administration button to the right of the repository you created
-
The new page will have instructions for pushing the image to Registry
-
Step 1, log in: sudo docker login --username=xxx registry.cn-hangzhou.aliyuncs.com
-
Step 2, tag the image with version information: sudo docker tag [ImageId] registry.cn-hangzhou.aliyuncs.com/xxx/my-images:[image version]
-
Step 3, push the image to the Aliyun image service: sudo docker push registry.cn-hangzhou.aliyuncs.com/xxx/my-images:[image version]
-
Download the image from Aliyun to the local machine
-
sudo docker pull registry.cn-hangzhou.aliyuncs.com/xxx/my-images:[image version]
-
Search format: namespace/image repository:TAG
Building a private repository
-
Start a Docker Registry using the registry image officially provided by Docker to build a local private image repository. The command is as follows:
docker run -d -p 5000:5000 --restart=always --name registry -v /mnt/registry:/var/lib/registry registry:2
-
docker tag hello-world:latest localhost:5000/my-hello-world
-
docker push localhost:5000/my-hello-world
-
Check the images in the local repository: http://localhost:5000/v2/my-hello-world/tags/list
-
The data can also be seen in the /mnt/registry directory
-
Configure private repository authentication
-
Run ifconfig on the server where the Docker Registry (DR for short) resides to get its IP
-
Generate a self-signed certificate (execute the following in the home directory). To ensure the security of the DR, a certificate is needed so that other Docker machines cannot access the DR at will. Generate a self-signed certificate on the DR's Docker host (there is no need to generate one if a certificate has already been purchased). The commands are as follows:
mkdir registry && cd registry && mkdir certs && cd certs
openssl req -x509 -days 3650 -subj '/CN=ip:port/' -nodes -newkey rsa:2048 -keyout domain.key -out domain.crt
-
-days 3650 is the certificate validity period; ip:port is the DR address; rsa:2048 is the key length
-
Generate a username and password
cd .. && mkdir auth
docker run --entrypoint htpasswd registry:2 -Bbn account password > auth/htpasswd
-
Start the DR
docker run -d -p 5000:5000 --restart=always --name registry \
  -v /mnt/registry:/var/lib/registry \
  -v `pwd`/auth:/auth \
  -e "REGISTRY_AUTH=htpasswd" \
  -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
  -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
  -v `pwd`/certs:/certs \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
  registry:2
-
Configure the DR access interface
mkdir -p /etc/docker/certs.d/ip:port
cp certs/domain.crt /etc/docker/certs.d/ip:port
-
Register Docker hosts to use the DR private repository
-
Edit the daemon.json file: vim /etc/docker/daemon.json
-
Add {"insecure-registries": ["ip:port"]}
-
Restart and load the Docker configuration file
-
systemctl reload docker
-
systemctl restart docker
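-
A hedged sketch of using the authenticated DR from a client (ip:port stands for the DR address; the image name is illustrative):
docker login ip:port                                   # enter the htpasswd account and password
docker tag hello-world:latest ip:port/my-hello-world
docker push ip:port/my-hello-world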
Docker network management
Although the default network provided by Docker is relatively simple to use, to ensure the security of the applications in each container it is recommended to use customized networks for container management in actual development. In Docker you can customize bridge networks and overlay networks, or create network plugins or remote networks, to achieve complete customization and control of container networking.
-
Bridge networks (the default network type): bridge-driven custom networks achieve container isolation. They suit small single-host environments; for managing large network environments (such as clusters), consider custom overlay networks.
-
docker network create --driver bridge isolated_nw: creates a bridge-driven network named isolated_nw; --driver can be abbreviated as -d, and --driver bridge can be omitted because docker defaults to bridge
-
docker run -itd --name=nwtest --network=isolated_nw busybox: specifies the network
-
docker inspect nwtest: inspects the container's network configuration
-
docker network connect bridge nwtest: adds another network to the container
-
docker network disconnect isolated_nw nwtest: disconnects the container from the network
-
docker network rm isolated_nw: removes the custom network named isolated_nw
-
docker network ls: lists networks
-
Overlay networks (swarm mode): to ensure security, the swarm cluster makes a custom overlay network available only to the nodes in the cluster that need the service, not to external services or standalone Docker hosts.
-
Custom network plugins: if none of the previous custom networks meet the requirements, you can use Docker's plugin mechanism to write a custom network driver plugin. A custom network plugin runs as a separate process on the host running the Docker daemon. Custom network driver plugins follow the same restrictions and installation rules as other plugins, all use the plugin API provided by Docker, and have a life cycle that includes installation, start, stop, and activation.
-
Network communication between containers
-
docker network inspect network name
-
Create two containers that use the default Bridge network
docker run -itd --name=c1 busybox
docker run -itd --name=c2 busybox
-
Create a container that uses the custom isolated_nw network
docker network create --driver bridge isolated_nw
docker run -itd --network=isolated_nw --name=c3 busybox
-
Connect the c2 container to the custom isolated_nw network
docker network connect isolated_nw c2
-
To check whether c1, c2, and c3 can communicate with each other, run ping -w 4 IP inside each container
-
Clearly c1 and c2 can communicate, and c2 and c3 can communicate, but c1 and c3 cannot (c1 also cannot ping c2's isolated_nw address 172.18.0.3).
Building a Docker Swarm cluster
By aggregating multiple Docker Engines, Swarm forms a large virtual Docker Engine that provides container cluster services externally. Swarm is just a scheduler and router: it does not run containers itself; it only receives requests from Docker clients and schedules suitable nodes to run containers. This means that even if Swarm fails for some reason, the nodes in the cluster keep running as usual, and when Swarm recovers it gathers the information needed to rebuild the cluster.
-
Docker swarm characteristics
-
Exposes the standard Docker API externally
-
Swarm itself focuses on Docker cluster management and is very lightweight with a low resource footprint
-
Swarm is currently released alongside Docker
-
Architecture
- Procedure for setting up a cluster
Initialize a swarm on the manager node with docker swarm init, then add Docker nodes to the swarm with docker swarm join on the workers; a sketch follows below.
If nodes cannot join because of the firewall, disable it: systemctl stop firewalld
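-
A hedged sketch of the join flow (IPs and the token are placeholders):
docker swarm init --advertise-addr 192.168.0.10       # on the manager; prints a join command with a token
docker swarm join --token <token> 192.168.0.10:2377   # on each worker node
docker node ls                                        # on the manager: list the cluster's nodes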
Docker Compose service orchestration
-
The complete environment of a project includes the application node (app), a database (MySQL), and a cache (Redis). Can all of these nodes be built and run in one container for ease of management? Yes, but it is not recommended, because it violates Docker's design of isolated runtime environments (each node has its own runtime environment, and combining them entangles those environments). Moreover, in a microservices architecture the number of services is too large to start them one by one by hand.
-
So how do you simplify maintaining a multi-node project environment? docker-compose solves this: docker-compose.yml describes the node containers in the project as well as their dependency information, and then docker-compose builds or launches everything with one command. A docker-compose.yml sample file:
nginx:
  image: nginx[:TAG]
  ports:                  # port mapping
    - "80:80"
  links:                  # link to the app container; nginx needs the service's IP and port
    - app
  volumes:                # directory mount
    - "./nginx/conf.d/:/etc/nginx/conf.d/"
app:
  image: luban/app
-
Install docker-compose
-
Check the Compose releases: github.com/docker/comp…
-
Download Compose (Docker must already be installed): curl -L github.com/docker/comp…-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
-
Grant execute permission: chmod +x /usr/local/bin/docker-compose
-
Check the installation: docker-compose -v
-
Create a shorthand for docker-compose: ln -s docker-compose dc
-
Uninstall docker-compose: rm /usr/local/bin/docker-compose
-
Service orchestration steps: three steps (executed in a newly created empty directory)
-
Write a DockerFile (building an image for each service eases migration ---- not required)
-
Write the docker-compose.yml file (describes the services to deploy)
-
Run docker-compose up (starts the services in the yml file)
-
Example
-
Preparation: two images (not built from a DockerFile)
docker pull mysql:5.7
docker pull wordpress
-
Create a new empty directory, create a docker-compose.yml file in it, and edit it:
version: '3'
services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: wordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    ports:
      - "8001:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
volumes:
  db_data:
The contents of this file create new db and wordpress containers, roughly equivalent to:
docker run --name db -e MYSQL_ROOT_PASSWORD=123456 -d mysql
docker run --name some-wordpress --link db:mysql -p 8001:80 -d wordpress
-
Start the services: docker-compose up
-
Access http://ip:8001 in a browser (WordPress is a blogging platform developed in PHP and can also be used as a content management system, CMS)
Docker visual interface: Portainer
Docker visualization tools: Docker UI (connects only to the local server), Shipyard (no longer maintained), Portainer, DaoCloud (paid)
-
Portainer is a graphical management tool for Docker, providing a status dashboard, rapid deployment of application templates, basic operations on containers, images, networks, and data volumes (including uploading and downloading images), centralized management and operation of Swarm clusters and services, login user management and access control, and other functions. It is comprehensive enough to meet essentially all the container management needs of small and medium-sized teams.
-
Download Portainer image
-
Search: docker search portainer
-
docker pull portainer/portainer
-
Standalone run ---- only one Docker host
docker run -d -p 9000:9000 --restart=always -v /var/run/docker.sock:/var/run/docker.sock --name portainer-test portainer/portainer
(-d run in the background; -p 9000:9000 port mapping; --restart=always start on boot; -v mounts the docker socket as a container volume; --name portainer-test names the container)
Select Local for the standalone version
-
Cluster run
-
With multiple Docker hosts, cluster management becomes important. Portainer also supports cluster management and can work with Swarm to manage cluster operations; refer to the above.
docker run -d -p 9000:9000 --restart=always --name portainer-test portainer/portainer
For a cluster, select Remote, enter the swarm IP, and click Connect; after a successful login the management page is displayed
Docker nodes can be added under Endpoints
Then switch nodes as follows: double-click a node to switch to it for management.
- On first login you need to register a user and set a password for the admin user (portainer)