Docker introduction
What is Docker
- Docker is an open-source application container engine written in Go
- Divided into CE (Community Edition) and EE (Enterprise Edition)
- How Docker differs from traditional virtualization: a traditional virtual machine creates a set of virtual hardware, runs a complete operating system on it, and then runs the required application processes on that system. An application process in a container runs directly on the host kernel; the container has no kernel of its own and performs no hardware virtualization. Containers are therefore much lighter than traditional virtual machines
Why Docker
More efficient use of system resources
- Docker uses system resources more efficiently because containers need no hardware virtualization and no full guest operating system. Whether measured by application execution speed, memory consumption, or file storage speed, containers are more efficient than traditional virtual machine technology, so a host with the same configuration can usually run more applications than it could with virtual machines
Faster startup time
- Traditional virtual machine technology usually takes several minutes to start application services, while Docker containers can start in seconds or even milliseconds because they run directly on the host kernel and do not need to boot a complete operating system. This greatly shortens development, testing, and deployment time.
Consistent operating environment
- A common problem in development is environment inconsistency. Because the development, test, and production environments differ, some bugs are not found during development. A Docker image provides a complete runtime environment (everything except the kernel), ensuring consistency of the application's runtime environment, so "it works on my machine" is no longer an issue.
Continuous delivery and deployment
- The most desirable thing for development and operations (DevOps) people is a single build or configuration that can run anywhere.
- With Docker, continuous integration, continuous delivery, and deployment can be achieved by customizing application images. Developers can build images with a Dockerfile and run integration tests in a continuous integration system, while operations can deploy the image directly into production, or even deploy automatically through a continuous delivery/deployment system.
Easier migration
- Docker ensures the consistency of execution environment, making application migration easier. Docker can run on many platforms, whether physical machine, virtual machine, public cloud, private cloud, or even laptop, and its running results are consistent. Therefore, users can easily migrate an application running on one platform to another without worrying that the application will not run properly due to the change of the operating environment.
Easier maintenance and extension
- Docker's layered storage and image technology make it easy to reuse the repeated parts of applications, which simplifies maintaining and updating them; extending an image on top of a base image is also very simple. In addition, the Docker team, together with various open-source project teams, maintains a large number of high-quality official images that can be used directly in production or further customized as a base, greatly reducing the cost of producing application images.
Comparison with traditional virtual machines
Feature | Container | Virtual machine |
---|---|---|
Startup time | Seconds | Minutes |
Disk usage | Typically MB | Typically GB |
Performance | Close to native | Noticeably slower than native |
Instances per machine | Thousands of containers on a single host | Usually dozens |
Docker architecture
Docker uses a client-server (C/S) architecture and manages and creates Docker containers through a remote API.
Docker containers are created by Docker images.
The relationship between containers and images is similar to that between objects and classes in object-oriented programming.
Docker | Object-oriented programming |
---|---|
Container | Object |
Image | Class |
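To make the analogy concrete, the same image can back any number of containers, much as one class can have many object instances. A minimal sketch (the container names web1 and web2 are made up for illustration):

docker run -d --name web1 tomcat   # first container from the tomcat image
docker run -d --name web2 tomcat   # second, independent container from the same image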
- Description of the terms in the architecture
Term | Description |
---|---|
Image | A Docker image is a template used to create Docker containers. |
Container | A container is an independently running application or group of applications. |
Client | The Docker client communicates with the Docker daemon through the command line or other tools that use the Docker API (docs.docker.com/reference/a…). |
Host | A physical or virtual machine that runs the Docker daemon and containers. |
Registry | A Docker registry stores images and can be understood as the counterpart of a code repository in version control. Docker Hub (hub.docker.com) provides a large set of images for use. |
Docker Machine | Docker Machine is a command-line tool that simplifies Docker installation; with a simple command it can install Docker on the corresponding platform, such as VirtualBox, Digital Ocean, or Microsoft Azure. |
Installing Docker
Install using a script
Execute the installation script
$ curl -fsSL https://get.docker.com -o get-docker.sh
$ sudo sh get-docker.sh
Start the Docker process
sudo systemctl enable docker
sudo systemctl start docker
Configure a registry mirror (accelerator) for Docker
- In /etc/docker/daemon.json (create the file if it does not exist), add the following:
{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
- Restarting the service
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
- Check whether the mirror takes effect. If images are still pulled slowly after the mirror is configured, check manually whether the configuration took effect by running the docker info command. If the following information is displayed, the configuration succeeded.
Registry Mirrors:
https://registry.docker-cn.com/
Uninstalling Docker
- Query the installed packages first
yum list installed | grep docker
- Remove docker-related packages
yum -y remove <the packages listed above>
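For example, if the query above listed the standard Docker CE packages, the removal might look like the following; the package names vary by installation, so substitute whatever the previous command actually printed:

sudo yum -y remove docker-ce docker-ce-cli containerd.io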
Using images
Getting an image
As mentioned earlier, Docker Hub has a large number of high-quality images available, so here’s how to get them.
The command to fetch an image from a Docker registry is docker pull. Its format is as follows:
docker pull [options] [registry address[:port]/]repository name[:tag]
The options can be listed with the docker pull --help command. Here we describe the format of the image name.
- Docker registry address: the format is usually <domain name/IP>[:port]. The default is Docker Hub.
- Repository name: as mentioned earlier, this is a two-part name of the form <username>/<software name>. On Docker Hub, if no username is given, it defaults to library, i.e. the official images.
$ docker pull ubuntu:16.04
16.04: Pulling from library/ubuntu
bf5d46315322: Pull complete
9f13e0ac480c: Pull complete
e8988b5b3097: Pull complete
40af181810e7: Pull complete
e6f7c7e5c03e: Pull complete
Digest: sha256:147913621d9cdea08853f6ba9116c2e27a3ceffecf3b492983ae97c3d643fbbe
Status: Downloaded newer image for ubuntu:16.04
No registry address is given in the command above, so the image is fetched from Docker Hub. The image name is ubuntu:16.04, so the image tagged 16.04 in the official library/ubuntu repository is retrieved.
As the download output shows, the image is made up of multiple storage layers and is downloaded layer by layer rather than as a single file. The first 12 characters of each layer's ID are shown during the download, and at the end the image's full sha256 digest is printed to verify the consistency of the download.
When you run the command yourself, the layer IDs and the sha256 digest you see may differ from the ones here. This is because the official image is continuously maintained: bug fixes and version updates are released under the same tag, so anyone using that tag gets a more secure and stable image.
Listing images
Image list
To list the images that have been downloaded, there are two commands.
docker images
or
docker image ls
The results are as follows
REPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu 16.04 2a697363a870 38 hours ago 119MB
tomcat latest 27600aa3d7f1 8 days ago 463MB
ubuntu <none> a3551444fc85 2 weeks ago 119MB
The list contains the repository name, tag, image ID, creation time, and size.
Image size
If you look closely, you will notice that the sizes shown here differ from the sizes displayed on Docker Hub. For example, the ubuntu:16.04 image listed as 127 MB here is about 50 MB on Docker Hub. This is because Docker Hub shows the compressed size: images stay compressed while being uploaded and downloaded, so Docker Hub reports the transfer size, which is what matters for the network. docker image ls, in contrast, shows the size after the image has been downloaded and unpacked locally, which is, strictly speaking, the sum of the space taken by all of its expanded layers, since local disk usage is what matters once the image is on disk.
Another point worth noting is that the total of the sizes in the docker image ls list is not the actual disk space consumed by all images. Because Docker images are multi-layered structures that can be inherited and reused, different images may share layers when they are built from the same base image. Since Docker uses a union file system, each shared layer is stored only once, so the actual disk usage is likely to be much smaller than the sum of the listed image sizes.
You can run the following command to see the space used by images, containers, and data volumes.
$ docker system df
TYPE                TOTAL     ACTIVE    SIZE      RECLAIMABLE
Images              3         1         700.2MB   237.5MB (33%)
Containers          2         0         65.95kB   65.95kB (100%)
Local Volumes       0         0         0B        0B
Build Cache         0         0         0B        0B
Dangling images
In the list of images above, you can also see a special image whose tag is shown as <none>
ubuntu <none> a3551444fc85 2 weeks ago 119MB
This image originally had a repository name and tag. When a new version was released and docker pull was run again, the name was transferred to the newly downloaded image, while the old image lost its name and became <none>. Besides docker pull, docker build can also produce such images: when the new image uses the same name as an old one, the old image loses its name, leaving an image whose repository name and tag are both <none>. Such untagged images are known as dangling images and can be listed specifically with the following command:
docker images -f dangling=true
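Dangling images usually have no value and can be removed in bulk. One common way, reusing the same dangling filter (equivalent in effect to the docker image prune command described later):

docker rmi $(docker images -f dangling=true -q)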
Intermediate layer images
To speed up image building and allow resources to be reused, Docker uses intermediate layer images. After some use, you may therefore see intermediate images that other images depend on. By default, docker image ls lists only top-level images; to show all images, including intermediate ones, add the -a parameter.
docker images -a
Listing a subset of images
Without any arguments, docker image ls lists all top-level images, but sometimes we only want to list some of them. docker image ls accepts several parameters to help with this.
- List images by repository name
$ docker images ubuntu
REPOSITORY   TAG      IMAGE ID       CREATED        SIZE
ubuntu       16.04    2a697363a870   38 hours ago   119MB
ubuntu       <none>   a3551444fc85   2 weeks ago    119MB
- List a specific image, that is, specify both the repository name and the tag
$ docker images ubuntu:16.04
REPOSITORY   TAG     IMAGE ID       CREATED        SIZE
ubuntu       16.04   2a697363a870   38 hours ago   119MB
Removing images
To delete a local image, run the docker image rm command in the following format:
- Delete an image
docker image rm [options] <image 1> [<image 2> ...]
or
docker rmi [options] <image 1> [<image 2> ...]
- Delete all dangling (untagged) images
docker image prune
<image> can be the image's short ID, full ID, name, or digest. <> denotes a required parameter and [] an optional one.
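As a rough illustration using the names and IDs from the listings above, each of the following forms identifies the same image:

docker image rm ubuntu:16.04      # by repository name and tag
docker image rm 2a697363a870      # by short image ID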
Custom image
Customizing an image with a Dockerfile
A Dockerfile is a text file containing instructions; each instruction builds one layer, so the content of each instruction describes how that layer should be built.
In this example, we use a Dockerfile to customize our own Tomcat image
- Run a Tomcat container
docker run -p 8080:8080 tomcat
- Open a new terminal window (for example in XShell) and enter the container you just started interactively
docker exec -it f208e826caf2 bash
- Modify index.jsp in the webapps/ROOT directory by appending a line
cd webapps/ROOT
echo "hello docker tomcat" >> index.jsp
Above, we modified the running Tomcat container but not the image, so every newly started container would have to be modified by hand again. If we instead modify the image itself, containers started from it no longer need to be changed.
Now create a Dockerfile file to build our own image
cd /usr/local
mkdir -p docker/mytomcat
cd docker/mytomcat
- Write a Dockerfile script to build the image
vi Dockerfile
- Write a script
FROM tomcat
WORKDIR /usr/local/tomcat/webapps/ROOT/
RUN rm -rf *
RUN echo "hello docker tomcat" > /usr/local/tomcat/webapps/ROOT/index.html
- FROM tomcat: use tomcat as the base image
- WORKDIR /usr/local/tomcat/webapps/ROOT/: set the working directory; subsequent instructions run in this directory
- RUN rm -rf *: delete all files in the current directory
- RUN echo "hello docker tomcat" > /usr/local/tomcat/webapps/ROOT/index.html: write an index.html file into that directory
- Go to the directory where the Dockerfile resides and build the image
docker build -t mytomcat .
mytomcat is the tag of the image being built, and the trailing . denotes the current directory (the build context).
OK, we have now built our own Tomcat image; let's run it.
- Running custom images
docker run -p 8080:8080 mytomcat
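A quick way to check the result from the host, assuming nothing else is bound to port 8080:

curl http://localhost:8080/
# should print: hello docker tomcat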
Dockerfile instruction reference
FROM
- Specifies the base image and must be the first instruction. A special case is FROM scratch (an empty image), which means the instructions that follow form the first layer of the image
RUN
Runs the specified command.
RUN has two formats:
- RUN <command> (shell format)
- RUN ["executable", "param1", "param2"] (exec format)
The first form is followed directly by a shell command: on Linux the default shell is /bin/sh -c, on Windows it is cmd /S /C.
The second form is similar to a function call: executable is an executable file, followed by its parameters.
Comparison of the two forms:
RUN /bin/bash -c 'source $HOME/.bashrc; echo $HOME'
RUN ["/bin/bash", "-c", "echo hello"]
Each RUN instruction creates a new image layer. Too many RUN instructions produce a bloated image with an excessive number of layers, which not only increases build and deployment time but is also error prone.
Within a RUN instruction, commands can be continued across lines with \, as in the sketch below.
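To keep the layer count down, related commands are usually chained inside a single RUN with && and continued with \. A sketch, assuming a Debian or Ubuntu based image:

# chaining with && keeps this to a single extra layer
RUN apt-get update \
    && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*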
COPY
A copy instruction that, as the name suggests, copies files from the build context into the image.
The syntax is as follows:
- COPY <src>... <dest>
- COPY ["<src>",... "<dest>"]
The source for COPY can only be a local file in the build context; otherwise its usage is the same as ADD's.
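A short sketch with made-up paths; the sources are relative to the build context and the destination is a path inside the image:

# copy a single file and a whole directory into the image
COPY index.html /usr/local/tomcat/webapps/ROOT/
COPY conf/ /usr/local/tomcat/conf/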
ADD
A copy instruction that copies files into the image.
If you think of the virtual machine and container as two Linux servers, this command is similar to SCP, except that SCP requires user name and password authentication, while ADD does not.
The syntax is as follows:
- ADD <src>... <dest>
- ADD ["<src>",... "<dest>"]
The destination can be an absolute path inside the container or a path relative to the working directory.
The source can be a local file, a local compressed archive, or a URL.
If written as a URL, ADD is similar to the wget command
If the source is a local tar.gz archive, it is automatically extracted at the destination after being copied.
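A sketch with a hypothetical archive name; the local tar.gz from the build context is unpacked automatically into the destination directory:

# app.tar.gz is extracted into /usr/local/tomcat/webapps/ inside the image
ADD app.tar.gz /usr/local/tomcat/webapps/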
Using containers
Start the container
Basic start
docker run -p 8081:8080 tomcat
- run: starts a container; starting a container starts a process
- -p: maps ports; the first value is the host port and the second is the container port. Here host port 8081 is mapped to container port 8080, so the application can be reached on port 8081 of the host
- tomcat: the image to run
Daemon start
That is, the container runs in the background; it does not occupy the current terminal or block it with log output.
docker run -d -p 8080:8080 tomcat
- -d: starts the container in the daemon state
After this mode is started, only a complete container Id is returned. To view logs generated during startup, run the following command:
docker container logs <container ID>
Enter the started container
docker exec -it <container ID> bash
- exec: execute a command inside the running container
- -it: indicates interactive mode
- bash: the shell to start
Once inside the container, some Linux commands such as ll and vi are not available, because the tomcat image is built on a minimal Linux base.
List the container
docker container ls -a
or
docker ps -a
Stop a container
docker container stop <container ID>
Remove the container
docker container rm <container ID>
or
docker rm <container ID>
Delete all terminated containers
docker container prune
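A possible cleanup sequence is to stop every running container and then remove all stopped ones; use it with care, since -f skips the confirmation prompt:

docker container stop $(docker container ls -q)
docker container prune -f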
Container data persistence
When a container is destroyed, the data inside it is lost. How can data be shared between the container and the host, and how can the container's data be made persistent? This is container data persistence, and it relies on Docker data volumes.
A data volume is a special directory that can be used by one or more containers, bypassing UFS and providing a number of useful features:
- Data volumes can be shared and reused between containers
- Changes to data volumes take effect immediately
- Data volume updates do not affect mirroring
- The data volume will always exist by default, even if the container is deleted
Let's run a Tomcat container and back its webapps ROOT directory with a directory on the host
- Create a new ROOT directory under /root on the host
mkdir /root/ROOT
- Create an index.html file in the ROOT directory
cd /root/ROOT
vi index.html
The content is Hello I am domain, this is index.html in volume
- Start a container to mount data volumes
docker run -d -p 8081:8080 -v /root/ROOT:/usr/local/tomcat/webapps/ROOT tomcat
-v: the part before the colon is the host directory and the part after is the container directory; this mounts the host's /root/ROOT directory over the container's ROOT directory
If you visit port 8081 in your browser, you can see the contents of your host’s index.html.
You can view information about data volumes in a container:
docker inspect <container ID>
Data volume information is under the Mounts Key
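To show only the mount information, the inspect output can be filtered with a Go template; a sketch, using the ID of the container started above:

docker inspect --format '{{json .Mounts}}' <container ID>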
Deploying a MySQL database in a container
- Pull the mysql image
If no tag is specified, the latest mysql image is pulled by default
docker pull mysql
- Start the container
Start container with data volume
docker run -p 3306:3306 --name mysql \
-v /usr/local/docker/mysql/logs:/var/log/mysql \
-v /usr/local/docker/mysql/data:/var/lib/mysql \
-e MYSQL_ROOT_PASSWORD=123456 \
-d mysql
The meaning of each parameter:
- -v /usr/local/docker/mysql/logs:/var/log/mysql: mounts the log directory; the host directory is on the left and the container directory on the right
- -v /usr/local/docker/mysql/data:/var/lib/mysql: mounts the data directory
- -e MYSQL_ROOT_PASSWORD=123456: sets the root user's password through an environment variable
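One way to verify that the database is up is to open a MySQL session inside the container; this assumes the container name mysql and the password set above (enter 123456 when prompted):

docker exec -it mysql mysql -uroot -p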
docker-compose
The Compose project is Docker's official open-source project for quickly orchestrating clusters of Docker containers.
Compose has two important concepts:
- Service: an application container; a service can in fact run several container instances of the same image.
- Project: a complete business unit made up of a set of associated application containers, defined in the docker-compose.yml file.
Compose’s default management object is a project, which provides easy lifecycle management through subcommands for a set of containers in a project.
Installing and uninstalling docker-compose
Binary packages are used for installation and uninstallation
The installation
Download and install the docker-compose command from Github
curl -L https://github.com/docker/compose/releases/download/1.17.1/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
uninstall
Delete the binary file.
rm /usr/local/bin/docker-compose
Using docker-compose
Deploying a project with docker-compose
We will use docker-compose to start a project that needs two containers, one for Tomcat and one for MySQL; this is the relationship between services and a project mentioned earlier.
docker-compose builds and runs the project according to the docker-compose.yml file.
- Create and edit a docker-compose.yml file in the deployment directory
cd /usr/local/docker
mkdir myshop
cd myshop
vi docker-compose.yml
version: '3'
services:
  tomcat:
    restart: always
    image: 'tomcat'
    container_name: tomcat
    ports:
      - 8080:8080
    volumes:
      - /usr/local/docker/myshop/ROOT:/usr/local/tomcat/webapps/ROOT
  mysql:
    restart: always
    image: mysql
    container_name: mysql
    ports:
      - 3306:3306
    environment:
      TZ: Asia/Shanghai
      MYSQL_ROOT_PASSWORD: 123456
    command:
      --character-set-server=utf8mb4
      --collation-server=utf8mb4_general_ci
      --explicit_defaults_for_timestamp=true
      --lower_case_table_names=1
      --max_allowed_packet=128M
    volumes:
      - mysql-data:/var/lib/mysql
volumes:
  mysql-data:
Two services are configured here, and both use data volumes. The tomcat service uses the common host directory:container directory form. The mysql service instead uses a named volume managed by Docker, so the host directory is assigned by Docker rather than specified manually. Here is what that part of the configuration means:
The line - mysql-data:/var/lib/mysql in the mysql service refers to this named volume, which is declared at the top level of the file:
volumes:
  mysql-data:
- Start the project with docker-compose: in the directory containing the docker-compose.yml file, run the start command
docker-compose up -d
-d means the project runs in daemon (background) mode; it does not occupy the current terminal or print logs to it.
- Stop the project with docker-compose: in the directory containing the docker-compose.yml file, run the stop command
docker-compose down
- View logs with docker-compose
How do you view logs after starting in daemon mode? Run the following command:
docker-compose logs
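Two related commands can help here as well, for example following a single service's log stream or checking the status of the project's services:

docker-compose logs -f tomcat
docker-compose ps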
Deploying MySQL with docker-compose
version: '3.1'
services:
  db:
    image: mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: 123456
    command:
      --default-authentication-plugin=mysql_native_password
      --character-set-server=utf8mb4
      --collation-server=utf8mb4_general_ci
      --explicit_defaults_for_timestamp=true
      --lower_case_table_names=1
    ports:
      - 3306:3306
    volumes:
      - ./data:/var/lib/mysql
  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080
Installing GitLab with docker-compose
Using docker-compose to deploy a Git hosting platform is very convenient.
Create a docker-compose.yml configuration file in /usr/local/docker-gitlab as follows:
version: '3'
services:
  web:
    image: 'twang2218/gitlab-ce-zh'
    restart: always
    hostname: '192.168.65.130'
    environment:
      TZ: 'Asia/Shanghai'
      GITLAB_OMNIBUS_CONFIG: |
        external_url 'http://192.168.65.130:8080'
        gitlab_rails['gitlab_shell_ssh_port'] = 2222
        unicorn['port'] = 8888
        nginx['listen_port'] = 8080
    ports:
      - '8080:8080'
      - '8443:443'
      - '2222:22'
    volumes:
      - /usr/local/docker/gitlab/config:/etc/gitlab
      - /usr/local/docker/gitlab/data:/var/opt/gitlab
      - /usr/local/docker/gitlab/logs:/var/log/gitlab
Here is what each node attribute means:
- restart: restart policy; always restart the container when it stops or when Docker starts
- hostname: the host name, here the IP address of the current host
- environment: environment variables used to set initial configuration
- TZ: time zone
- GITLAB_OMNIBUS_CONFIG: GitLab-specific configuration
- external_url: the external access URL
- gitlab_rails: the Git SSH port
- unicorn: the internal port
- nginx: the Nginx listening port, which must match the port in the external access URL above
- ports: port mappings between host and container; host port on the left, container port on the right
- volumes: data volume directories; host directory on the left, container directory on the right
Go to the official website to search for the image gitlab-ce-zh, pull the image:
docker pull twang2218/gitlab-ce-zh
In the directory containing the docker-compose.yml file, run the startup command:
docker-compose up
The container is large and takes a long time to start. After startup succeeds, you are prompted to change the root user's password; once it is changed, you can log in as root with the new password.
Deploying a Maven private repository (Nexus) with docker-compose
Nexus is a powerful repository manager. Once deployed, you can upload your own SDKs and artifacts to it and, after configuration, pull them for use.
- In the /usr/local/docker/nexus3 directory, create a docker-compose.yml configuration file with the following contents:
version: '3.1'
services:
  nexus:
    restart: always
    image: sonatype/nexus3
    container_name: nexus
    ports:
      - 8081:8081
    volumes:
      - /usr/local/docker/nexus3/data:/nexus-data
- Start The Nexus3 container in the current directory:
docker-compose up
Startup may fail with an I/O permission error. The fix is to change the permissions of the data directory under the current directory:
chmod 777 data
Close the previous container and restart it:
docker-compose down
docker-compose up
Using the private repository in projects
- In Maven's settings.xml, add the Nexus authentication information to the servers section:
<server>
<id>nexus-releases</id>
<username>admin</username>
<password>admin123</password>
</server>
<server>
<id>nexus-snapshots</id>
<username>admin</username>
<password>admin123</password>
</server>
- Configure automated deployment
Add the following code to pom.xml:
<distributionManagement>
  <repository>
    <id>nexus-releases</id>
    <name>Nexus Release Repository</name>
    <url>http://192.168.65.130:8081/repository/maven-releases/</url>
  </repository>
  <snapshotRepository>
    <id>nexus-snapshots</id>
    <name>Nexus Snapshot Repository</name>
    <url>http://192.168.65.130:8081/repository/maven-snapshots/</url>
  </snapshotRepository>
</distributionManagement>
The id must match the one defined in the servers configuration above, and the url is the repository address copied from the Browse page in Nexus.
- Deploy to the warehouse
mvn deploy
- Nexus 3.0 does not support uploading through the web page; you can use the following Maven command instead:
For example, to upload the aliyun-sdk-oss jar:
mvn deploy:deploy-file -DgroupId=com.aliyun.oss -DartifactId=aliyun-sdk-oss -Dversion=2.2.3 -Dpackaging=jar -Dfile=D:\aliyun-sdk-oss-2.2.3.jar -Durl=http://127.0.0.1:8081/repository/maven-3rd/ -DrepositoryId=nexus-releases
Matters needing attention:
- When uploading third-party JAR packages, it is recommended to create a dedicated repository for them (here maven-3rd) to simplify management and maintenance.
- -DrepositoryId=nexus-releases corresponds to the server id configured in settings.xml (used for authentication).
- The order in which a project resolves dependencies is:
local repository -> private repository -> official (central) repository