Preface
With the rise of cloud native technology, containerization has become essential in both development and operations. Docker is the representative containerization technology of the cloud native ecosystem. This article reviews the fundamentals of Docker and serves as a starting point for getting into cloud native.
1.1 Introduction to Docker
1.1 background
Today's enterprise applications are expected to provide high availability, high concurrency, high performance, security, monitoring, and other capabilities. Traditional deployment raises several pain points:
- High availability and high concurrency are usually addressed by adding servers and building clusters. But traffic is uneven: the 12306 ticketing site sees far more load during the Spring Festival rush than on ordinary days, and e-commerce sites peak on Double Eleven. Servers are busy at peaks and idle the rest of the time, which wastes resources. How can capacity be adjusted dynamically?
- Deploying an application across many servers by hand takes a great deal of time and effort. How can deployment be made efficient?
- Applications on the same node are not isolated from each other: if one application misbehaves (for example, it saturates the CPU), every application on that node may be dragged down.
- The development environment and the production environment must be kept consistent. How can configuration mistakes between them be avoided?
Docker container deployment is the new solution to these problems with traditional deployment.
1.2 Docker overview
Docker is an open source application container engine that lets developers package an application and its dependencies into a portable container and then distribute it to any popular Linux machine, providing a lightweight form of virtualization.
As the name suggests, Docker is a container engine. Containers aside, what is an engine?
For example: a car's engine is what makes the car run; likewise, Docker is the engine that lets developers keep their applications in isolated, portable containers that can be deployed on a variety of machines without worrying about compatibility.
That is why the Docker logo is a whale carrying small shipping containers: the whale is the engine, and each shipping container represents a container.
So what is a container?
In technical terms, a container is a lightweight, portable, self-contained software packaging technology that allows applications to run the same way almost anywhere. Containers share the same set of operating system resources. Since containers share the host operating system's kernel, they cannot run a different operating system from the host; for example, you cannot run Windows containers on a Linux server.
1.3 Why Docker
Docker can automate the packaging and publishing of web applications.
Automated testing, continuous integration, and release.
Deploying and scaling databases or other backend applications in a service environment.
Building your own PaaS environment from scratch or extending an existing platform.
IaaS (Infrastructure as a Service): you rent a server, then deploy your own code and install your own software. PaaS (Platform as a Service): the provider supplies the server and base software, and you only need to develop your own application. SaaS (Software as a Service): the provider supplies both the server and the software, and you simply pay to use it.
1.3.1 Simplified procedures
Docker lets developers package applications and their dependencies into a portable container and distribute it to any popular Linux machine. Docker changes how virtualization is done: developers can put their work directly into Docker for management. Convenience is Docker's biggest advantage; tasks that used to take days or even weeks can be completed in seconds with Docker containers.
1.3.2 Savings
The combination of Docker and the cloud makes full use of cloud resources: it not only removes the burden of hardware management but also changes how virtualization is done, and shared image layers across deployments save space.
1.3.3 Continuous delivery and deployment
With Docker, continuous integration, continuous delivery and deployment can be achieved by customizing application images. Developers can build images using Dockerfile and integrate them with the continuous integration system for integration testing, while operations can quickly deploy images directly into production or even deploy them automatically with the continuous deployment system. And using Dockerfile makes the image build transparent, helping to better deploy the image in a production environment.
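As a concrete illustration, the flow might look like the minimal sketch below (the image name demo-app, the tag 9.0, and the archive demo.war are hypothetical examples; demo.war is assumed to exist in the build directory):
# write a minimal Dockerfile
cat > Dockerfile <<'EOF'
# start from an official Tomcat base image
FROM tomcat:9.0
# copy the application archive into Tomcat's webapps directory
COPY demo.war /usr/local/tomcat/webapps/
EOF
# build an image from it and run a container for testing or deployment
docker build -t demo-app:1.0 .
docker run -d -p 8080:8080 demo-app:1.0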
1.3.4 Easy Migration
Docker ensures the consistency of the execution environment, making application migration easier without worrying about the situation that the application cannot run properly due to the change of the environment.
1.4 Differences Between Docker and Virtual Machines
As a lightweight virtualization approach, Docker has significant advantages over traditional virtual machines for running applications:
- Docker containers are fast and can be started and stopped in seconds, much faster than traditional virtual machines.
- Docker containers have little demand for system resources, and thousands of Docker containers can run simultaneously on a host.
- Docker facilitates users to obtain, distribute and update application images through operations similar to Git, with simple instructions and low learning costs.
- Docker supports flexible automatic creation and deployment through Dockerfile configuration files, which improves efficiency. Apart from running the application itself, a Docker container consumes almost no additional system resources, so application performance is preserved and system overhead stays minimal. To run N different applications with traditional virtual machines, you have to start N virtual machines, each with its own dedicated memory, disk, and other resources; with Docker you only need to start N isolated containers and put the applications inside them.

Of course, traditional virtual machines do provide an extra layer of isolation. But that does not mean Docker is insecure: Docker leverages multiple protection mechanisms of the Linux kernel to achieve strict and reliable isolation, and since version 1.3 it has introduced security options and image signing, which greatly improve the security of using Docker.
1.2 Docker Architecture
1.2.1 Overview
Docker uses a client-server architecture and manages and creates Docker containers through remote APIs.
Docker containers are created from Docker images.
The relationship between containers and images is similar to that between objects and classes in object-oriented programming.
1.2.2 Basic Concepts of Docker
The aforementioned Docker has three basic concepts to understand:
- Image
- Container
- Repository
Docker's repository is similar to a Maven repository: just as a Maven repository stores JARs, a Docker repository stores many images.
- Maven’s central repository is at mvnrepository.com/.
- Docker's central repository (Docker Hub) address is hub.docker.com/
An image can create many containers, just as a class in Java can create many objects. For example, based on the mysql image, I constructed 10 mysql containers, which are conveniently isolated from each other.
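As a minimal sketch (the container names, host ports, and password are illustrative; the official mysql image requires a root password to be set):
docker run -d --name mysql-a -e MYSQL_ROOT_PASSWORD=123456 -p 3307:3306 mysql:5.7
docker run -d --name mysql-b -e MYSQL_ROOT_PASSWORD=123456 -p 3308:3306 mysql:5.7
# two independent, isolated containers created from the same mysql image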
1.2.3 Docker engine
Docker Engine consists of three core components (the Docker CLI, the REST API, and the Docker daemon), described below.
● Docker CLI: the Docker command line interface. Developers use Docker commands to interact with the Docker daemon and manage entities such as images, containers, networks, and data volumes.
● REST API: the application programming interface through which clients can interact with the Docker daemon and instruct it to carry out operations.
● Docker daemon: the server-side component of Docker, a background process that receives and handles requests from the command line interface and the REST API and then performs the corresponding operations.
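For example, the same query can be made either through the CLI or directly against the daemon's REST API over its local Unix socket (a rough sketch, assuming curl is installed and the default socket path /var/run/docker.sock):
docker version                                                      # via the Docker CLI
curl --unix-socket /var/run/docker.sock http://localhost/version    # via the REST API, returns JSON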
1.3 Docker Repository
As we learned earlier, Docker repositories are used to store images, and Docker Hub provides a huge collection of images for use.
Docker officially maintains a public repository, Docker Hub, which already contains more than 2,650,000 images. Most needs can be met by pulling an image directly from Docker Hub.
Repositories are divided into public and private repositories, similar to code repositories on GitHub.
- Public repository: the default, providing a large number of official images; access from mainland China can be accelerated with a registry mirror such as the Alibaba Cloud accelerator.
- Private repository: a repository a user builds locally for private use.
1.4 Docker installation
To install Docker Engine you need a maintained version of CentOS 7 or 8; archived versions are not supported or tested. On CentOS 7 the kernel version must be 3.10 or higher.
For the official installation reference, see docs.docker.com/engine/inst…
1. Verify the version
Since March 2017, Docker has been divided into two branch versions: Docker CE and Docker EE.
Docker CE is a community free version, and Docker EE is an enterprise version, which emphasizes security but requires payment.
This article installs Docker CE.
[root@VM-16-6-centos ~]# uname -r
3.10.0-1160.11.1.el7.x86_64
2. Remove the old version
sudo yum remove docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-engine
3. Install some necessary system tools
Install the required packages: yum-utils provides the yum-config-manager utility, while device-mapper-persistent-data and lvm2 are required by the devicemapper storage driver.
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
4. Add software source information
Source 1 (officially recommended):
sudo yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
Source 2 (Alibaba Cloud mirror), commonly used in mainland China:
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
5. Update the YUM cache
sudo yum makecache fast
6. Install docker-ce
sudo yum -y install docker-ce
7. Start the Docker background service
sudo systemctl start docker
8. Restart the Docker service
sudo systemctl restart docker
9. Check the Docker version
[root@VM-16-6-centos ~]# docker version
Client: Docker Engine - Community
 Version:           20.10.11
 API version:       1.41
 Git commit:        dea9396
 Built:             Thu Nov 18 00:38:53 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.11
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.16.9
  Git commit:       847da18
  Built:            Thu Nov 18 00:37:17 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.12
  GitCommit:        7b11cfaabd73bb80907dd23182b9347b4245eb5d
 runc:
  Version:          1.0.2
  GitCommit:        v1.0.2-0-g52b36a2
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
10. Remove Docker
sudo yum remove docker-ce
sudo rm -rf /var/lib/docker
1.5 Docker Image Accelerator
Because of network conditions in mainland China, pulling Docker images can be slow, so we configure a registry mirror (accelerator) to speed things up.
Docker official and domestic service providers provide domestic accelerator services, such as:
- Docker Official Registry Mirror China
- Ali Cloud accelerator
- Tencent Cloud Accelerator
Since my server runs on Tencent Cloud, I will use the Tencent Cloud mirror here.
For the Alibaba Cloud accelerator, see: developer.aliyun.com/article/299…
1. Create a file
vim /etc/docker/daemon.json
2. Add the configuration
{
"registry-mirrors": ["https://mirror.ccs.tencentyun.com"]
}
3. Reload and restart Docker
systemctl daemon-reload
systemctl restart docker
4. Verify the accelerator
Run docker info; if the configured mirror address appears under Registry Mirrors, the configuration has taken effect.
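A quick way to check, assuming the configuration above has been applied:
docker info | grep -A 1 'Registry Mirrors'
# expected to list: https://mirror.ccs.tencentyun.com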
1.6 Docker Images
The image is one of the three core components of Docker.
Before Docker can run the container, the corresponding image needs to exist locally. If it does not exist locally, Docker will download it from the image repository.
1.6.1 Docker Obtaining an Image
As mentioned earlier, Docker Hub hosts a large number of high-quality images; here is how to obtain and manage them.
Search for an image
Use the docker search command to search for images; the results show whether each image is official and how many stars it has.
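For example:
docker search tomcat                      # search for tomcat images
docker search --filter=stars=100 tomcat   # only show images with at least 100 stars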
Pull an image
The command for pulling an image from a Docker registry is docker pull, and its format is:
docker pull [options] [registry address[:port]/]repository name[:tag]
You can run docker pull --help to view the details.
For example, download Tomcat
docker pull tomcat:<version>   # if no version (tag) is specified, the latest version is pulled
List images
To list the images that have already been downloaded, use the docker image ls command.
docker images   # equivalent to: docker image ls
The list contains the warehouse name, label, mirror ID, creation time, and occupied space.
The image ID is the unique identifier of an image, much like a person's ID number, and a single image ID can carry multiple tags for different versions. Docker stores layers that are common to several images only once, which saves space.
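A small illustration: tagging an existing image simply adds another name pointing at the same image ID without copying any data (my-tomcat:stable is a hypothetical name; it assumes tomcat:9.0 has already been pulled):
docker tag tomcat:9.0 my-tomcat:stable
docker images   # both entries show the same IMAGE ID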
1.6.2 Docker Deleting a Local Image
To delete an image, first make sure it is not currently used by any container.
docker image rm <image ID>
1.6.3 Other Auxiliary Docker commands
View only the IDs of local images
docker images -q
View how an image was built (its layer history)
docker history <image name>
1.6.4 Docker Saving the Image
Back up the image of the local repository
1. Run the save command to save the image of the local repository to the current directory
docker save -o tomcat.lei.tar <image name>
2. Example: save the tomcat image and check the generated file
[root@VM-16-6-centos docker]# docker save -o tomcat.lei.tar tomcat
[root@VM-16-6-centos docker]# ls
daemon.json  key.json  tomcat.lei.tar
3. Import the image backup file in the local directory to the local Docker repository
docker load -i tomcat.lei.tar
1.7 Docker Containers
Containers are the core concept of Docker.
Simply put, a container is a single application or group of applications that run independently, and the environment in which they run.
A virtual machine, by contrast, can be understood as a simulation of a running operating system (providing a running environment and other system environments) and the applications running on it.
1. Check the container status
docker container ls      # list running containers
docker container ls -a   # list all containers, including stopped ones
2. Start a container
There are two ways to start containers: either by creating a new container based on the image and starting it, or by restarting containers in the stopped state.
docker run [parameters] <image name>:<tag> [command to execute]
Common parameters:
-i   keep the container's standard input open
-t   allocate a pseudo terminal (tty)
-d   run the container in the background
--rm   remove the container automatically after the command or program exits
--name   give the container a custom name
-p   map a host port to a container port (host:container)
docker run --rm -d --name tomcat1 -p 8080:8080 tomcat
Practice: run Tomcat
[root@VM-16-6-centos docker]# docker run -d --name tomcat-8080 -p 8080:8080 tomcat
bd1b5fa092eb084984aa4ac55367e20a3d58937f888b5e757087af3191de1c9f
[root@VM-16-6-centos docker]# docker ps
CONTAINER ID   IMAGE     COMMAND             CREATED         STATUS         PORTS                                       NAMES
bd1b5fa092eb   tomcat    "catalina.sh run"   6 seconds ago   Up 5 seconds   0.0.0.0:8080->8080/tcp, :::8080->8080/tcp   tomcat-8080
3. Stop the container
docker stop <container ID or name>
docker stop $(docker ps -a -q)   # stop all containers
4. Restart a stopped container
docker start <container ID or name>
5. Delete the container
docker rm <container ID or name>. A running container must be stopped before it can be deleted.
6. Enter the container
Sometimes you need to enter a container to work inside it; use the docker exec command.
The -i and -t parameters
docker exec accepts several parameters; the -i and -t parameters are the main ones described here.
When only the -i parameter is used, no pseudo terminal is allocated, so the familiar Linux command prompt does not appear, but command results are still returned.
When -i and -t are used together, the familiar Linux command prompt is displayed.
docker exec -it <container ID or name> bash
Example:
After entering the container, modify the default Tomcat page and visit it again to see the result.
Note that the Linux environment inside the default container is a minimal installation, with only the most basic commands available.
exit leaves the container without stopping it.
root@bd1b5fa092eb:/usr/local/tomcat/ROOT# echo 'xiao-lei'>>index.html
root@bd1b5fa092eb:/usr/local/tomcat/ROOT# ls
index.html
7. Exchange files between host and container
Files can be copied between the host and a container as follows:
docker cp <container>:<path> <local path>   # copy from the container to the host
docker cp <local path> <container>:<path>   # copy from the host to the container
The host machine copies a file to the container: copies a.txt to the specified directory in the container
docker cp a.txt tomcat-8080:/usr/local/tomcat/webapps/ROOT
Copy b.txt from the container to the host:
docker cp tomcat-8080:/usr/local/tomcat/webapps/ROOT/b.txt /root
1.8 Viewing Docker Logs
View a container's logs with the docker logs command:
docker logs -f -t --since="2021-11-30" --tail=10 tomcat-8080
docker logs -f -t --tail=10 tomcat-8080
--since: only output logs generated after the specified date
-f: follow the log output in real time
-t: show the timestamp of each log line
--tail=10: show only the last 10 lines
tomcat-8080: the container name
1.9 Case Study
Now that we have learned the basic Docker commands, let’s deploy a project to the Tomcat server using Docker.
Copy the project files into Tomcat's webapps directory, and the application can then be opened and accessed in a browser.
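A minimal sketch of that deployment, reusing the tomcat-8080 container started earlier (demo.war is a hypothetical archive built beforehand):
docker cp demo.war tomcat-8080:/usr/local/tomcat/webapps/
# Tomcat auto-deploys the archive; the app is then reachable at http://<host IP>:8080/demo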
1.10 Docker Data Volumes
Problem: containers are created from images, and once a container is destroyed, the data inside it is deleted as well. Files uploaded to the server, for example, would be lost because data in a container is not persistent by itself. Is there a way to persist data independently of a single container, and even share it among multiple containers? That is what data volumes are for.
1.10.1 What is a Data Volume
Data volume: A special directory that can be used by one or more containers
Features:
- Data volumes can be shared and reused between containers
- Changes to data volumes take effect immediately
- Data volume updates do not affect mirroring
- By default, the data volume will always exist, even if the container is deleted
To solve these problems, Docker introduces the data Volume mechanism. Data volumes are specific files or folders in one or more containers, which exist in the host in a form independent of the Docker file system.
The defining characteristic of a data volume is that its life cycle is independent of the container's life cycle.
Data volume scenarios:
- Data is shared among multiple containers. Multiple containers can mount the same data volume in read-only or read-write mode to share the data on it (see the sketch after this list).
- When the host cannot guarantee the existence of a certain directory or fixed path files, data volumes can be used to circumvent this limitation.
- When you want to store data in a container outside of the host, such as on a remote host or in the cloud.
- When you need to back up, restore, and migrate container data between different hosts, data volumes are the best choice.
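A minimal sketch of the first scenario, sharing one named volume between two containers (the names and content are illustrative; busybox is used only as a tiny test image):
docker volume create shared-data
docker run -d --name writer -v shared-data:/data busybox sh -c 'echo hello > /data/msg && sleep 3600'
docker run --rm --name reader -v shared-data:/data:ro busybox cat /data/msg   # prints: hello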
1.10.2 Data Volume Application
1. Create a data volume
docker volume create <data volume name>   # after creation, the volume's data is stored under /var/lib/docker/volumes/<data volume name>/_data by default
2. View the data volume
docker volume inspect <data volume name>
3. View information about all data volumes
docker volume ls
4. Delete the data volume
docker volume rm <data volume name>
5. Use a data volume when running a container
docker run -v <data volume name>:<container path> <image ID>   # if the named data volume does not exist, Docker creates it automatically
docker run -v <host path>:<container path> <image ID>   # bind-mount a specific host path as the volume's storage location (recommended)
Use docker run -v to create data volume directly:
# create a data volume
docker volume create tomcat-volume
# view all data volumes
docker volume ls
# start a container with a host directory mounted into it
docker run -v /www/package/hydrogen/:/usr/local/tomcat/webapps/hydrogen2 tomcat
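For comparison, a named-volume variant of the same idea (the container name tomcat-vol and host port 8081 are illustrative); in this case Docker manages the storage under /var/lib/docker/volumes/:
docker run -d --name tomcat-vol -p 8081:8080 -v tomcat-volume:/usr/local/tomcat/webapps tomcat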
2 Common Software Installation
2.1 Basic Installation
2.1.1 nginx
Pull the Nginx image
docker pull nginx:1.20
View the list of local mirrors (see nginx)
docker images
Run the container
docker run --name nginx-test -p 800:80 -d nginx   # map host port 800 to container port 80
Enter the container:
- The /etc/nginx directory is the configuration file directory
- The /usr/share/nginx directory holds the static HTML files
- /usr/sbin/nginx is the nginx executable used to start the server
The container ships with a minimal installation, so we map the configuration files and content directories out of the container to local directories on the host.
Mount a mapped volume
1. Create a directory on the host
mkdir -p /usr/local/nginx
2. Create three file directories in this directory
mkdir -p /usr/local/nginx/html
mkdir -p /usr/local/nginx/logs
mkdir -p /usr/local/nginx/conf
3. Copy the configuration file
docker cp <container ID>:/etc/nginx/nginx.conf /usr/local/nginx/conf
docker cp <container ID>:/etc/nginx/conf.d /usr/local/nginx/conf
4. Modify the configuration file
In /usr/local/nginx/conf/nginx.conf, change the default path used by the include directive to the new local path.
5. Delete the nginx container
docker rm -f nginx-test
6. Create a container, mount the configuration file, and map ports
docker run -d -p 800:80 --name nginx-800 \
  -v /www/server/nginx/html:/usr/share/nginx/html \
  -v /www/server/nginx/conf/nginx.conf:/etc/nginx/nginx.conf \
  -v /www/server/nginx/conf/conf.d/default.conf:/etc/nginx/conf.d/default.conf \
  -v /www/server/nginx/logs:/var/log/nginx \
  nginx
Cluster setup
The single-node configuration is now complete; next comes the nginx cluster configuration, using nginx load balancing to decide which server handles each request (for high availability).
1. Go to the conf path of the host and modify the configuration file nginx.conf
# add an upstream block between the http {} nodes (using localhost can be slow, prefer the IP)
upstream nginxCluster {
    server 127.0.0.1:8080;   # server on port 8080
    server 127.0.0.1:8081;   # server on port 8081
}
2. In the conf.d/default.conf file, inside the location {} block, use proxy_pass to set the reverse proxy address:
The address must start with http:// and match the name defined in the upstream block.
proxy_pass sets the proxy address and forwards requests to the target servers.
location / {
    proxy_pass http://nginxCluster;
}
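After editing the mounted configuration, a quick sanity check and reload can be done from the host, using the nginx-800 container created above:
docker exec nginx-800 nginx -t          # test the configuration syntax
docker exec nginx-800 nginx -s reload   # reload nginx without restarting the container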
2.1.2 mysql
1. docker pull mysql
2. docker run -d --name mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=123456 mysql   # the official mysql image requires a root password to be set (123456 here is only an example)
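To verify, connect to the MySQL server inside the container (a sketch; the password is whatever was set via MYSQL_ROOT_PASSWORD):
docker exec -it mysql mysql -uroot -p
# or, if a mysql client is installed on the host:
mysql -h 127.0.0.1 -P 3306 -uroot -p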