Docker overview

Docker is an open-source application container engine. Developers can package their applications into containers and then run those containers on any other machine with Docker, enabling rapid deployment.

Simply put, Docker is a software containerization platform. Just as ships, trains, and trucks transport shipping containers regardless of the goods inside, software containers act as standard units of software deployment that can hold different code and dependencies.

Containerizing software in this way allows developers and IT professionals to deploy it to different environments with little or no modification, and to quickly restore service from images when a failure occurs.

Docker advantages

1. Feature advantages

2. Resource advantages

Docker basic concepts

Client: the Docker client, which accepts user commands and configuration flags and communicates with the Docker daemon.

Images: read-only templates with instructions for creating Docker containers, similar to an operating system installation CD.

Containers: runnable instances of images. The relationship between images and containers is analogous to that between classes and objects in object-oriented programming.

Registry (repository): a service that centrally stores and distributes images. The most commonly used registry is the official Docker Hub.

What has Docker changed?

Docker has changed cloud services, gradually making the ideal of integrated, standardized cloud services possible. Docker is already part of many cloud strategies, and many developers plan to use it to move their business to the cloud. It has also become the first choice of developers who want to avoid being locked in to a particular cloud provider.

Docker has changed product delivery by providing a complete set of solutions and processes covering the entire product life cycle.

Docker has changed development by simplifying environment configuration, encapsulating the runtime environment, and unifying environments. It also provides a means of rapid deployment.

Docker has changed testing: multi-version testing becomes extremely convenient, and a test environment can be built quickly without developer intervention.

Docker has changed operations. Environment consistency makes operations easier, and support for hot updates means deployments no longer require midnight overtime; updates can be rolled out at any time, and you can quickly roll back to a specified version when a serious problem occurs.

Docker has changed architecture. Support for automatic scaling makes architectures simpler, and distributed systems are easier to build and support. At the same time, legacy monolithic applications are easier to transform into modern applications.

All in all, Docker is in some ways a game changer in product development. Although Docker is a technology, it also brings new thinking, new processes, and new ways of working, and it is pushing the industry forward.

Docker installation and use

Operating system: CentOS 7

1. Install dependencies

yum install -y yum-utils device-mapper-persistent-data lvm2

2. Add software sources

yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo  # use the Aliyun mirror source

3. Install docker-ce (there are requirements on the system kernel; CentOS 6 is not supported)

yum clean all
yum makecache fast
yum -y install docker-ce docker-ce-cli containerd.io

4. Enable auto-start and start Docker

systemctl enable docker
systemctl start docker

5. View the version

docker version

Run example: Nginx

1. Search for and download the image

docker search nginx
docker pull nginx

2. Start a container and map the port locally

docker run -d -p 8080:80 --name nginx nginx  # map container port 80 to local port 8080

3. Access the locally mapped port
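With the container from step 2 running, the mapping can be checked from the host (assuming the 8080:80 mapping above):

```shell
# Fetch the nginx welcome page through the mapped port
curl -s http://localhost:8080 | head -n 5
```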

Common Docker commands

1. Image control

Push an image: docker push [OPTIONS] NAME[:TAG]
Pull an image: docker pull [OPTIONS] NAME[:TAG]
Commit an image: docker commit [OPTIONS] CONTAINER NAME[:TAG]
Build an image: docker build [OPTIONS] PATH
Delete an image: docker rmi [OPTIONS] IMAGE [IMAGE...]
List images: docker images [OPTIONS] [REPOSITORY[:TAG]]
Tag an image: docker tag SOURCE_IMAGE[:TAG] TARGET_IMAGE[:TAG]

2. Container control

Start/restart a container: docker start/restart CONTAINER
Stop/kill a container: docker stop/kill CONTAINER
Delete a container: docker rm [OPTIONS] CONTAINER [CONTAINER...]
Rename a container: docker rename CONTAINER CONTAINER_NEW
Attach to a container: docker attach CONTAINER
Run a command in a container: docker exec CONTAINER COMMAND
View container logs: docker logs [OPTIONS] CONTAINER
List containers: docker ps [OPTIONS]

3. Start the container

docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
-d: run the container in the background and print the container ID
-i: run the container in interactive mode, usually used with -t
-t: allocate a pseudo-terminal for the container, usually used with -i
--name: assign a name to the container
--net="bridge": specify the container's network type, one of: bridge/host/none/container:<name|id>
-p/-P: port mapping, in one of the forms hostPort:containerPort, ip:hostPort:containerPort, or containerPort

4. Other commands

View run help: docker run --help
Copy a file into a container: docker cp custom.conf nginx:/etc/nginx/conf.d/
Update container start options: docker container update --restart=always nginx
Check Docker logs: tail -f /var/log/messages

Reference: "Of these 20 Docker commands, how many do you know?" For more information, see the official documentation: docs.docker.com/engin…

Docker image build

1. docker commit (1. run, 2. modify, 3. save)

# 1. Run a container
docker run -dit -p 8080:80 --name nginx nginx
# 2. Modify it, e.g. copy in a configuration file
docker cp custom.conf nginx:/etc/nginx/conf.d/
# 3. Save the container as a new image
docker commit nginx zwx/nginx

2. Dockerfile (1. write, 2. build)

docker build -t zwx/nginx .  # build from the Dockerfile in the current directory
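A minimal Dockerfile matching the build command above might look like this; `custom.conf` is an illustrative local file name, not from the original:

```dockerfile
# Start from the official nginx image and bake in a custom configuration
FROM nginx
COPY custom.conf /etc/nginx/conf.d/
EXPOSE 80
```

Place it as `Dockerfile` in the current directory, then run the build command from there.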

Docker local repository

1. Pull the registry image

docker search registry
docker pull registry

2. Start the registry service

docker run -dit --name=registry -p 5000:5000 --restart=always --privileged=true \
  -v /usr/local/my_registry:/var/lib/registry registry
# --name=registry: the container name
# -p 5000:5000: map the registry port
# --restart=always: the repository container starts automatically whenever Docker restarts
# --privileged=true: generally not needed
# -v /usr/local/my_registry:/var/lib/registry: persist repository image data to the host

3. Configure the registry as insecure (Docker requires HTTPS by default; pulling from a local HTTP repository requires this configuration)

vim /etc/docker/daemon.json
# add the following, then restart Docker (systemctl restart docker):
{"insecure-registries":["xx.xx.xx.xx:5000"]}

4. Tag the image with the repository address

docker tag zwx/nginx x.xx.xx.xx:5000/zwx/nginx  # can be omitted if the repository address was specified at build time

5. Upload the image to the local repository

docker push x.xx.xx.xx:5000/zwx/nginx

6. Query the local repository

curl -XGET http://x.xx.xx.xx:5000/v2/_catalog

Docker graphical management tool: Portainer

1. Introduction

Portainer is a graphical management tool for Docker, providing a status dashboard, rapid application deployment from templates, and basic operations on containers, images, networks, and data volumes (including uploading and downloading images, creating containers, etc.).

Its functions also include event log display, container console operation, centralized management of Swarm clusters and services, and login user management and access control. The feature set is comprehensive enough to meet most container-management needs of small and medium-sized organizations.

2. Installation and use

docker run -d -p 9000:9000 --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --name portainer portainer/portainer
# -p 9000:9000: Portainer's default port is 9000; map it locally for access by local address
# --restart=always: restart automatically with Docker
# -v /var/run/docker.sock:/var/run/docker.sock: must be specified in standalone mode

Access http://localhost:9000. On first login you need to register a user and set a password for the admin account; in the standalone version, then select the local connection to manage Docker.

Docker cluster management tool: Swarm

1. Introduction

Docker Swarm is a cluster management tool officially provided by Docker. Its main function is to abstract several Docker hosts into a whole, and manage various Docker resources on these Docker hosts through a unified portal.

2. Installation and use

Docker Swarm was a separate project before Docker 1.12. After Docker 1.12 was released, the project was merged into Docker and became a subcommand of Docker.

To start a swarm, run the following initialization command:

docker swarm init --advertise-addr xx.xx.xx.xx --listen-addr xx.xx.xx.xx:2377
# --advertise-addr: the address advertised to other nodes
# --listen-addr: the listen address; the port defaults to 2377

Add a manager node

# On an existing manager, print the join command for a new manager:
docker swarm join-token manager
# Run the printed command on the new node, for example:
docker swarm join --advertise-addr xx.xx.xx.xx --listen-addr xx.xx.xx.xx:2377 --token SWMTKN-1-29ynh5uyfiiospy4fsm4pd4xucyji2rn0oj4b4ak4s7a37syf9-ajkrv2ctjr5cmxzuij75tbrmz xx.xx.xx.xx:2377

Add a worker node

# On a manager, print the join command for a new worker:
docker swarm join-token worker
# Run the printed command on the new node, for example:
docker swarm join --advertise-addr xx.xx.xx.xx --listen-addr xx.xx.xx.xx:2377 --token SWMTKN-1-29ynh5uyfiiospy4fsm4pd4xucyji2rn0oj4b4ak4s7a37syf9-ajkrv2ctjr5cmxzuij75tbrmz xx.xx.xx.xx:2377

View nodes

docker node ls

Create a service

docker service create [OPTIONS] IMAGE [COMMAND] [ARG...]
--detach, -d: run the service in the background; defaults to false
--name: service name
--network: network connection
--publish, -p: port mapping
--env, -e: set environment variables
--tty, -t: allocate a pseudo-TTY that supports terminal login
--mount: mount a file or directory
--replicas: specify the number of tasks
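As a sketch of how these options combine (the service name `web` and the replica count are illustrative):

```shell
# Create a replicated nginx service: 3 tasks, container port 80 published on port 8080 of every node
docker service create --name web --replicas 3 --publish 8080:80 nginx

docker service ls          # list services
docker service ps web      # see which nodes run the tasks
docker service scale web=5 # scale the service out later
```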

What are the similarities and differences between Swarm and Kubernetes?

  • A) Different origins

Kubernetes is the result of Google applying its experience in managing Linux containers to managing Docker. It performs well in many respects and, most importantly, is built on Google's years of valuable experience.

Kubernetes was not written only for Docker; it takes clustering to a whole new level, at the cost of a steep learning curve. Docker Swarm takes a different approach: it is Docker's native clustering tool.

Swarm's most convenient feature is that it exposes the standard Docker API, which means any tool you already use to talk to Docker (the Docker CLI, Docker Compose, etc.) can be used seamlessly with Docker Swarm.

  • B) Different installation and configuration

Swarm's installation is simple, direct, and flexible: all we need to do is install a service discovery tool and run the Swarm container on all nodes.

By comparison, installing Kubernetes is more complicated and arcane, and the installation differs from operating system to operating system; each has its own separate installation instructions.

  • C) Different operation modes

Using Swarm is no different from using containers. For example, if you are used to using the Docker CLI (command line interface), you can continue to use almost the same commands.

If you’re used to using Docker Compose to run containers, you can continue to use it in Swarm clusters. However you’re used to working with containers, you can keep working the same way, just at the larger scale of a cluster.

Kubernetes requires you to learn its own CLI (command line interface) and configuration. You can’t use a docker-compose.yml you created earlier; you have to create a new configuration for Kubernetes.

You also can’t use the Docker CLI (command line interface) that you learned earlier. You have to learn Kubernetes CLI.

Finally, when it comes to choosing between Docker Swarm and Kubernetes, consider the following:

  • Do you want to rely on Docker itself to solve clustering problems? If so, select Swarm. If something isn’t supported in Docker, it probably won’t be found in Swarm, which relies on the Docker API.
  • On the other hand, if you want a tool that can work around Docker’s limitations, Kubernetes is a good choice. Kubernetes is not based on Docker, but on Google’s years of experience managing containers. It does things on its own terms.

Docker operation and maintenance flow chart

Docker configuration management

  • 1. After using containers, do I still need configuration management?

At first, we were idealistic, like Docker officially: we naively believed containers should be immutable, and that whenever configuration changes are needed, the image should be rebuilt and redeployed.

Based on this idea, we added an automatic image-build module to cSphere, where users can configure the address of a code repository. A service's configuration files live in a Git or SVN repository; when configuration needs to change, it is pushed to the repository, a hook automatically triggers an image build, and the online containers are rebuilt automatically.

With this system, users can easily batch-update online services, and it is not limited to configuration file changes; code changes are also supported. In practice, this system serves development and test environments well and improves work efficiency.

However, when used in a production environment, we found the process is not so perfect. Although image building and deployment are automated, each build targets an entire VCS repository, so changing a single line of configuration means rebuilding the whole image, and updating containers means distributing the image to every machine; configuration changes are simply too slow. A configuration change made this way may also involve restarting the service, which is unacceptable in some production scenarios and may cause a temporary service interruption.

  • 2. How should application configuration files be handled?

An application's configuration files in Docker should still support changes for different environments, and ideally the configuration can be changed online and take effect on restart. There are generally two approaches.

A) Docker environment variables

You need to figure out in advance which parameters will change frequently when deploying the container, extract them into container environment variables, and fill in different values when deploying. However, if you later discover that other parameters also vary between deployment scenarios, you need to rebuild the image.
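The usual pattern is an entrypoint script that reads those environment variables with defaults, so one image serves every environment. A minimal sketch (the variable names are illustrative):

```shell
#!/bin/sh
# Read deployment-specific settings from the environment, with defaults
LISTEN_PORT="${LISTEN_PORT:-80}"
WORKER_COUNT="${WORKER_COUNT:-2}"
echo "listening on port ${LISTEN_PORT} with ${WORKER_COUNT} workers"
# → listening on port 80 with 2 workers   (when neither variable is set)
```

At deployment time you would pass `docker run -e LISTEN_PORT=8080 …` to override a value without rebuilding the image.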

B) Application configuration files

The approach above is not very flexible. The more flexible approach is to separate the configuration file from the image, so the image is not bound to any particular configuration.
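One common way to do this is a bind mount: keep the configuration on the host and mount it into the container, so the same image runs with different configurations per environment. The host path below is illustrative:

```shell
# Mount a host configuration directory into the container (read-only)
docker run -d --name nginx -p 8080:80 \
  -v /data/nginx/conf.d:/etc/nginx/conf.d:ro \
  nginx

# After editing the files on the host, reload without rebuilding the image
docker exec nginx nginx -s reload
```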

![attachments-2020-11-xGx3saDT5fa8a8d57eff6.jpg](https://six.club/image/show/attachments-2020-11-xGx3saDT5fa8a8d57eff6.jpg)
Source: https://www.cnblogs.com/leozhanggg/p/12039953.html