Preface
“It works on my machine” is a phrase you have heard a lot in the past few years. When it comes to installation and deployment, there are always a bunch of problems; after all, the operating system version, software environment, hardware resources, network, and other factors are all at work, which inevitably leads developers and operations folks to blame each other, even though most of the time the fault has nothing to do with the system being deployed. The best solution is to reduce the friction caused by environment differences while improving productivity. The emergence of Docker makes such problems easy to solve: the application and its configuration dependencies are packaged into a deliverable runtime environment that can be started and run directly. Of course, Docker is not limited to this; let’s learn and explore it together.
Main text
1. Overview
1.1 Introduction to Docker
Docker is an open-source application container engine developed in Go. It is an open platform for developing, shipping, and running applications: applications can be decoupled from the infrastructure, enabling rapid software delivery.
Look at the Docker logo below: Docker is like a little whale, and each square it carries can be understood as a container. No matter what is inside, it can be packaged, stored, and shipped in container form. Docker does not care what is in each container; it manages everything uniformly in container form, and containers are isolated from each other, so multiple containers running on Docker do not affect one another.
Since version 17.03, Docker has been split into CE (Community Edition) and EE (Enterprise Edition). The Community Edition is usually sufficient; it is powerful and free.
1.2 Docker architecture
Docker uses a client/server (C/S) architecture: the client communicates with the Docker daemon, which receives the client's instructions and executes them. The three flows in the figure above are described below:
- The client sends a docker build instruction; the server (Docker daemon) receives it, packages the corresponding files, and generates an image (Images).
- The client sends a docker pull instruction; the server (Docker daemon) receives it, searches the remote repository (Registry) for the image, and downloads it to the Docker host (DOCKER_HOST). If the image cannot be found, an error is reported.
- The client sends a docker run instruction; the server (Docker daemon) receives it and first looks for the image locally. If it exists locally, a container instance is started directly from the image; if not, the image is downloaded from the Registry and a container instance is then started from it. If the image cannot be found in either place, an error is reported.
The above only outlines the execution flow from client to server using three key instructions (sketched below); there are in fact many more instructions, which will be organized and shared in a later article.
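A minimal sketch of the three flows described above (the image and container names here are illustrative, not from the original article):
docker build -t myapp:1.0 .     # client sends build; the daemon packages the build context into an image
docker pull nginx:latest        # the daemon fetches the image from the Registry to DOCKER_HOST
docker run -d --name web nginx  # the daemon starts a container, pulling the image first if it is missing locally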
Definitions and functions of the terms above:
- Docker daemon: listens for instructions sent by clients and manages Docker objects such as images, containers, and networks.
- Client: the main way users interact with the Docker host; it sends instructions and requests.
- Registry: stores images. Docker Hub is the largest image repository, where almost any image can usually be found; to speed up pulls, a domestic mirror can be designated.
- Images: read-only templates for starting containers. An analogy makes this easy to understand: an image is the class in a programming language, and a container is an instance created from the class with new (see the sketch after this list).
- Containers: running instances of images.
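To make the class/instance analogy concrete, here is a small sketch (the image and container names are illustrative): one image can spawn several isolated containers, just as one class can be instantiated many times.
docker pull nginx:latest           # the "class": a read-only image
docker run -d --name web1 nginx    # a first "instance": a running container
docker run -d --name web2 nginx    # a second, isolated instance of the same image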
1.3 Docker benefits
- More agile development: developers are free to define their environments and can create and deploy applications faster and more easily, while operations staff can respond to change quickly and flexibly.
- High portability and scalability: Docker containers can run in many kinds of environments, such as development machines, virtual machines, and servers, and applications and related services can be scaled out or torn down in real time according to business needs.
- Full use of hardware resources: Docker is lightweight, starts quickly, and containers can share common services. Unlike a traditional virtual machine, which virtualizes an entire system on its own, occupies a lot of resources, and is still not fast enough, Docker containers are isolated from each other and do not conflict, so many containers can run at the same time and make full use of the hardware.
So much theory up front is mainly in service of practical use; once the theory is put into practice, it will naturally become clear.
2. Installation
The host environment used here is a cloud server I bought earlier, running CentOS 7. Installation differs for other systems and versions; for details, see the official documentation (docs.docker.com/get-docker/), where the steps are laid out in detail.
1. Remove old versions
sudo yum remove docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-engine
2. Install required dependency packages
sudo yum install -y yum-utils
3. Set up the image repository
sudo yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
4. Update the yum package index
sudo yum makecache fast
5. Install Docker
sudo yum install docker-ce docker-ce-cli containerd.io
6. Start Docker
sudo systemctl start docker
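Optionally (this step is not part of the original list, but is a common companion to starting the service), Docker can be set to start automatically at boot:
sudo systemctl enable docker  # start the Docker service automatically on boot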
7. Test Docker
sudo docker run hello-world
The steps above complete the installation of Docker, but because images are pulled from servers abroad, downloads are slow, so an image accelerator is usually configured. Domestic providers such as Tencent Cloud and Alibaba Cloud offer accelerated endpoints; Alibaba Cloud is used for the demo here, simply because I already have an account.
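A minimal sketch of configuring an accelerator; the mirror address below is a placeholder, replace it with the accelerator URL from your own Alibaba Cloud console:
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://<your-id>.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload   # reload systemd configuration
sudo systemctl restart docker  # restart Docker so the mirror takes effect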
3. First experience
With the installation complete, let's not rush ahead; first, a quick hands-on experience. It is easy to package and run a project, as follows:
- Prepare a project
Here we simply create a default API project (based on .NET Core 3.1) and change nothing.
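For reference, the same default project can be created from the command line (a sketch, assuming the .NET Core 3.1 SDK is installed; the project name DockerDemo matches the Dockerfile below):
dotnet new webapi -n DockerDemo  # create the default API template project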
- Write a Dockerfile
Add a file named Dockerfile to the project root directory; the content is as follows:
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim
WORKDIR /app
COPY . .
EXPOSE 80
ENTRYPOINT ["dotnet", "DockerDemo.dll"]
Set the Dockerfile's Copy to Output Directory property to Copy always, as follows:
Publish the project to a folder (file system), specifying a local directory, as follows:
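For reference, a folder publish can also be done from the command line (a sketch; the output directory is arbitrary):
dotnet publish -c Release -o ./publish  # publish the project to the ./publish folder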
- Copy the published files to the Docker host
I publish the project and copy the published files to my Alibaba Cloud server. The tool I use is FinalShell (a tool for connecting to servers and uploading files; very handy), as follows:
- Package the files as an image
Go to the directory containing the published files and run the docker build command to package them into an image, as follows:
mydockerdemo in the figure above is the image name, which can be customized. Use docker images to check whether the image was generated, as follows:
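A sketch of the two commands, run from the publish directory (which contains the Dockerfile):
docker build -t mydockerdemo .  # package the current directory into an image named mydockerdemo
docker images                   # list local images to confirm mydockerdemo was generated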
- Start a container (containing our project) from the image
Once the image is generated, we can start a container from it with the docker run instruction, that is, start our project:
docker run -d --name mydockerdemo -p 9999:80 mydockerdemo
-d: runs in background mode.
--name: assigns a name to the running container;
-p: specifies a port mapping; port 9999 on the host is mapped to port 80 in the container, because our project listens on port 80 inside the container;
The last argument is the name of the image generated in the previous step, from which the container instance is started.
- Test access
As long as the cloud server's security group and firewall are configured to open port 9999, the project can be accessed from the public network, as follows:
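A quick check from any machine (the server address is a placeholder; /weatherforecast is the route exposed by the default API template, assuming its controller was left unchanged):
curl http://<server-public-ip>:9999/weatherforecast  # should return the template's sample JSON data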
Some readers may say this still looks troublesome. In fact, writing the Dockerfile, packaging the image, and similar operations are one-time tasks. Once the image is generated, any other environment can start it directly from the image, with no need to separately install the .NET Core runtime or other infrastructure, because the packaged image contains the complete runtime environment.
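One way to reuse the image elsewhere (a sketch; a shared Registry would work just as well): export it with docker save, copy it to the target host, and start it there without any build tooling:
docker save -o mydockerdemo.tar mydockerdemo  # export the image to a tar archive
# copy mydockerdemo.tar to the target host, then:
docker load -i mydockerdemo.tar               # import the image on the new host
docker run -d --name mydockerdemo -p 9999:80 mydockerdemo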
Conclusion
This has been a first look at Docker, together with installation and a hands-on experience. Follow-up articles will cover common commands, Dockerfile, container data volume mounting, Docker Compose, Docker Swarm, and other related topics in turn. Docker has become a must-have skill; if you don't learn it, you will be out of date. Follow the Code Variety Circle and learn it with me.