Contents

• Before we begin

• Why do we use Docker?

• Docker core components

Docker client and server

Docker image

Docker container

Registry

• Installing Docker

• Getting started with Docker

Using DockerHub

What is a Dockerfile?

Running a web server program

• Docker command summary and further reading

Common Docker commands

Common Dockerfile directives


• Before we begin

If you use Docker, I think you will be won over by it: it helps ensure that applications deploy quickly, reliably and consistently, and you will never again have to worry about switching machines, reinstalling the environment, and hitting deployment bugs. To use a figurative comparison: if you fell in love with GitHub, I believe you will fall in love with Docker too. Let's get down to business. What can you learn from this article?

  • A clear picture of the Docker architecture (this really matters: with a clear architecture in mind, everything else falls into place);
  • Installing Docker on Linux;
  • Getting started with Docker (run a program and understand the workflow by using it);
  • A summary of Docker commands and links for further study (giving someone a fish is not as good as teaching them to fish: use this as a quick reference manual, and as a guide to deeper study).

With this handbook in hand, you are all set, so let's keep going (if you find this helpful, please follow and give it a like!).

• Why do we use Docker?

What if we develop an application that needs to run on a different machine? We package the program, then install the system, environment, dependencies, configuration and server software on the new machine, and only once all of that is ready can we run the program. This process is not only inefficient but also prone to all kinds of problems along the way; a perfectly normal program may even fail at runtime because the environment is not configured properly, which is very annoying. If instead we could package the entire environment, dependencies and program together, then whenever we change machines we could run it directly. At this point many people will think of virtual machines, since a virtual machine can simulate an entire operating-system-level environment and even the hardware, but I want to tell you that Docker is better than a virtual machine. As to why, just read on to find out.

Traditional virtual machines such as VMware and VirtualBox need to simulate the whole machine, including the hardware. Each virtual machine needs its own operating system, and once a virtual machine is started, all the resources allocated to it are occupied. Each virtual machine includes the application, the necessary binaries and libraries, and a complete guest operating system.

Docker instead uses container technology to share hardware resources and the operating system with the host, which makes dynamic allocation of resources possible. Containers contain the application and all of its dependencies but share the kernel with other containers, running as separate processes in user space on the host operating system. Container technology is an approach to operating-system virtualization that lets you run applications and their dependencies in resource-isolated processes. By using containers, you can easily package your application's code, configuration and dependencies into easy-to-use building blocks, which brings environmental consistency, operational efficiency, developer productivity and version control. Containers help ensure that applications are deployed quickly, reliably and consistently regardless of the deployment environment, and they also give us more fine-grained control over resources, making our infrastructure more efficient. The following picture shows the difference between the two intuitively.

If we look at the figure above, we get a sense of the container, but it’s not intuitive, so let’s put the real application in this hierarchy, which looks something like this.

Note that a container does not simulate a complete operating system; rather, it isolates processes, creating a protective layer around an ordinary process. For the process inside the container, its access to various resources is virtualized, which achieves isolation from the underlying system. Docker packages the application and its dependencies into a single file; running this file produces a virtual container, and programs run in that virtual container as if they were running on a real physical machine. With Docker you no longer have to worry about the environment.

Overall, Docker's interface is quite simple: users can easily create and use containers and put their own applications into them. Containers can also be versioned, copied, shared and modified just like ordinary code. There is also DockerHub, a hosting service like GitHub, which is very convenient; we'll talk about it later. The pros and cons of Docker compared with virtual machines can be summarized as follows:

  • Docker starts in seconds, while a virtual machine usually takes minutes to start
  • Docker needs fewer resources. Docker virtualizes at the operating-system level, and a Docker container interacts with the kernel with almost no performance loss, which beats virtualization that sits between the Hypervisor layer and the kernel layer
  • Docker is more lightweight. Docker's architecture can share one kernel and shared application libraries, so it occupies very little memory. On the same hardware, the number of images Docker can run far exceeds the number of virtual machines, and system utilization is very high
  • Compared with virtual machines, Docker provides weaker isolation. Docker isolates at the process level, while virtual machines achieve system-level isolation
  • Security: Docker is also less secure. Root inside a Docker container is the same as root on the host, so once a user in the container escalates from an ordinary user to root, they directly have root permission on the host and can then perform unlimited operations. With virtual machines, the tenant's root and the host's root permissions are separated, and VMs use hardware isolation technologies such as Intel VT-d and VT-x ring -1 to prevent a VM from breaking out and interfering with others. Containers do not yet have any form of hardware isolation, which makes them more vulnerable to attack
  • Manageability: Docker's centralized management tools are not yet mature, whereas all virtualization technologies have mature management tools; for example, VMware vCenter provides comprehensive VM management capabilities
  • High availability and recoverability: Docker supports high availability of services through rapid redeployment. Virtualization has mature, production-tested mechanisms such as load balancing, high availability, fault tolerance, migration and data protection; VMware can guarantee 99.999% virtual machine availability to ensure business continuity
  • Rapid creation and deletion: virtual machine creation takes minutes, while Docker container creation takes seconds. Docker's rapid iteration saves a lot of time in development, testing and deployment
  • Delivery and deployment: VMs can use images to achieve consistent environment delivery, but image distribution cannot be made systematic. Docker records the container build process in a Dockerfile, enabling rapid distribution and rapid deployment in a cluster

The differences between containers and virtual machines can be summarized in the following table:

Feature               Container                                        Virtual machine
Isolation level       Process level                                    Operating system level
Isolation mechanism   CGroups                                          Hypervisor
System overhead       0% ~ 5%                                          5% ~ 15%
Startup time          Seconds                                          Minutes
Image size            KB ~ MB                                          GB ~ TB
Cluster scale         Tens of thousands                                Hundreds
High availability     Elasticity, load balancing, dynamic scheduling   Backup, disaster recovery, migration

• Docker core components

  • Docker client and server;
  • Docker image;
  • Registry;
  • Docker container;

Docker client and server

Docker is a client-server (C/S) architecture application. The Docker client simply issues a request to the Docker server or daemon, which does all the work and returns the result. Docker provides a command-line tool, docker, and a set of RESTful APIs. You can run the Docker client and daemon on the same host, or connect from a local Docker client to a remote Docker daemon running on another host.

The Docker client is the command-line interface (CLI) tool docker that Docker provides, and it is the main way most Docker users interact with Docker. The client can build, run and stop applications, and it can also interact with a Docker host remotely. The most commonly used client is the docker command; with it we can easily build and run Docker containers on a host. To put it plainly, what we use on the command line is the Docker client, as shown below.

The Docker daemon is the server component, running as a Linux background service; it is the core background process of Docker, and we also call it the daemon. It is responsible for responding to requests from the Docker client and translating those requests into system calls that carry out container-management operations. The daemon starts an API Server in the background, which receives the requests sent by the Docker client; the received requests are dispatched through a router inside the daemon, and the specific handlers execute them, as shown below.

Docker image

A Docker image can be regarded as a special file system. In addition to providing the programs, libraries, resources and configuration files required by the container at runtime, it also contains some configuration parameters prepared for the runtime (such as anonymous volumes, environment variables and users). An image contains no dynamic data, and its contents are not changed after it is built. An image is a unified view of a stack of read-only layers. If this definition is a little confusing, look at the following figure.

We see multiple read-only layers stacked on top of each other. All but the lowest layer will have a pointer to the next layer. These layers are the implementation details inside Docker and can be accessed on the host’s file system. The Union File System technology is able to consolidate different layers into a single file system, providing a unified view of these layers, thus hiding the existence of multiple layers and, from the user’s perspective, only one file system. We can see the form of this perspective on the right side of the image. Images can be generated in several ways:

  • Build an image from scratch
  • Download and use a ready-made image created by someone else
  • Create a new image on top of an existing image
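As a sketch of the third route (a new image on top of an existing one), a minimal Dockerfile only needs a FROM line plus the changes. The alpine base and the file names here are illustrative assumptions, not taken from this article:

```shell
# A minimal Dockerfile that builds a new image on top of an existing one.
# The base image (alpine) and the copied file are illustrative only.
cat > Dockerfile.sketch <<'EOF'
FROM alpine:3.12
COPY hello.txt /hello.txt
CMD ["cat", "/hello.txt"]
EOF

cat Dockerfile.sketch
```

Building it with something like docker build -f Dockerfile.sketch -t my/hello . would stack the new COPY layer on top of alpine's existing read-only layers.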

 

 

 

 

Docker container

A container is almost exactly the same as an image: it is also a unified view of a stack of layers, the only difference being that the top layer of a container is readable and writable.

Since the definition of a container says nothing about whether it is running, a container can simply be understood as: image plus read-write layer.
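The "stack of read-only layers plus one writable top layer" idea can be mimicked with plain shell logic. This is a toy sketch of the unified view only, not how a union file system is actually implemented: a file in a higher layer shadows the file of the same name in the layers below it.

```shell
# Two read-only image layers plus a writable container layer, modeled as directories
mkdir -p layer0 layer1 container_rw
echo "from layer0" > layer0/a.txt
echo "from layer0" > layer0/b.txt
echo "from layer1" > layer1/b.txt   # layer1 shadows b.txt from layer0

# The unified view: search from the top (writable) layer downwards
union_read() {
  for layer in container_rw layer1 layer0; do
    if [ -f "$layer/$1" ]; then
      cat "$layer/$1"
      return 0
    fi
  done
  return 1
}

union_read a.txt   # prints: from layer0
union_read b.txt   # prints: from layer1

# A write inside the "container" only touches the top layer;
# the image layers underneath stay read-only and unchanged
echo "from container" > container_rw/b.txt
union_read b.txt   # prints: from container
```

Note how the second read of b.txt changes while layer1/b.txt itself is untouched: that is the copy-on-write intuition behind the container's read-write layer.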

Registry

A Docker repository is a centralized place for storing image files. Once an image is built, it can easily be run on the current host; but if the image needs to be used on other servers, we need a centralized service to store and distribute images, and the Docker Registry is such a service. Repository and Registry are sometimes confused and not strictly distinguished. The concept of a Docker repository is similar to Git's, and a registry server can be understood as a hosting service like GitHub. In fact, a Docker Registry can contain multiple repositories; each repository can contain multiple tags, and each tag corresponds to one image. So an image repository is the place where Docker centrally stores image files, similar to the code repositories we are used to.

Typically, a repository contains images of different versions of the same software, and tags are used to distinguish the versions. We can specify exactly which version of the software we mean by using the format <repository>:<tag>. If no tag is given, latest is used as the default tag. Repositories come in two forms:

  • Public repositories
  • Private repositories
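The <repository>:<tag> convention, including the latest default, can be sketched in a few lines of shell. This parsing is illustrative only (real image references can also carry a registry hostname), not Docker's own code:

```shell
# Split an image reference into repository and tag;
# an untagged reference defaults to the latest tag
parse_ref() {
  case "$1" in
    *:*) echo "repository=${1%%:*} tag=${1##*:}" ;;
    *)   echo "repository=$1 tag=latest" ;;
  esac
}

parse_ref ubuntu:16.04   # prints: repository=ubuntu tag=16.04
parse_ref ubuntu         # prints: repository=ubuntu tag=latest
```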

Public Docker Registry services are open for users to use and allow users to manage their images. These public services typically let users upload and download public images for free, and may offer paid services for managing private images.

In addition to using public services, users can also set up a private Docker Registry locally. Docker officially provides a registry image that can be used directly as a private Registry service. Once a user has built their own image, they can use the push command to upload it to a public or private repository; the next time the image is needed on another machine, it can simply be pulled from the repository.

• Installing Docker

To keep this article from getting too long, I will link to another article of mine that covers the latest installation methods for Ubuntu 16 and Windows 10. One warning here: if you hit a conflict between VMware and Hyper-V on Windows 10, check out my other article on how to resolve it.

• Getting started with Docker

Using DockerHub

First log in to DockerHub and create a repository. The process is shown below

After creation, see the following figure

Here we create a Docker image from a Dockerfile (we will talk about Dockerfiles in detail later). Create a directory anywhere, then create a Dockerfile inside it with the following content (I used Ubuntu 16 here):

cat > Dockerfile <<EOF
FROM busybox
CMD echo "Hello world! This is my first Docker image."
EOF

After editing and saving, continue by building the image:

docker build -t <your_username>/my-first-repo .
# Note the trailing "." : it means build using the Dockerfile in the current directory

Once the image is created, running it will create a container for execution. Note that it is not the image that executes; the container is what executes. The creation process looks something like this.

Now you are ready to run it, using the following commands:

docker run <your_username>/my-first-repo
# e.g. docker run dengbocong/hello-world

# Push the image to the remote DockerHub
docker push <your_username>/my-first-repo

# List local images
docker images

# List all containers (including stopped ones)
docker ps -a

Now let’s use the Hello-world example to see how Docker components work together. The container startup process is as follows:

  • The Docker client runs the docker run command
  • The Docker daemon finds no hello-world image locally
  • The daemon downloads the image from Docker Hub
  • After downloading, the hello-world image is saved locally
  • The Docker daemon starts the container

What is a Dockerfile?

Dockerfiles are the key to images and containers; they make it easy to define the contents of an image. A Dockerfile is a configuration file for automatically building Docker images, and with one a user can quickly create a customized image. The instructions in a Dockerfile are very similar to shell commands under Linux. The following picture gives an intuitive feel for the relationship among Docker images, containers and Dockerfiles.

As we can see from the figure above, a Dockerfile lets us customize an image, and running that image with the docker command starts a container. A Dockerfile consists of command statements, one per line, and supports comment lines beginning with #. In general, a Dockerfile can be divided into four parts:

  • Base image (parent image) information: FROM
  • Maintainer information: MAINTAINER
  • Image-building instructions: RUN, ENV, ADD, WORKDIR, etc.
  • Container-start instructions: CMD, ENTRYPOINT, USER, etc.

Here is a simple Dockerfile example to walk through:

FROM python:2.7
MAINTAINER Angel_Kitty <[email protected]>
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
EXPOSE 5000
ENTRYPOINT ["python"]
CMD ["app.py"]

We can analyze the above process:

  • Pull the Python 2.7 base image from Docker Hub
  • Record the maintainer's information
  • Copy the local <src> (a path relative to the Dockerfile's directory) into the container
  • Set the working directory to /app
  • Install the dependency packages
  • Expose port 5000
  • Start the app

This Dockerfile shows how to start a Python Flask app (Flask is a lightweight web framework for Python).
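The Dockerfile above assumes a build context containing app.py and requirements.txt. A minimal sketch of such a context follows; the file contents are illustrative assumptions, not taken from this article:

```shell
# Create an illustrative build context for the Flask Dockerfile above
mkdir -p flask_app

# requirements.txt: the dependency that RUN pip install installs
cat > flask_app/requirements.txt <<'EOF'
flask
EOF

# app.py: a minimal Flask app serving on the EXPOSEd port 5000
cat > flask_app/app.py <<'EOF'
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello, Docker!"

if __name__ == "__main__":
    # Listen on all interfaces so the published port is reachable
    app.run(host="0.0.0.0", port=5000)
EOF

ls flask_app
```

With the example Dockerfile placed in flask_app/, docker build -t <your_username>/flask-app flask_app followed by docker run -p 5000:5000 <your_username>/flask-app would build the image and start the container.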

Running a web server program

We ran a simple Docker example above and now have an idea of what a Dockerfile is, so let's go a step further and run a server program in Docker, using a Dockerfile to build the image. It's very simple. Create a directory and a Dockerfile as follows:


mkdir static_web
cd static_web
touch Dockerfile

Then edit the Dockerfile with Vim as follows, then save and exit:

FROM nginx
MAINTAINER Name <email>
RUN echo '<h1>Hello, Docker!</h1>' > /usr/share/nginx/html/index.html

Then build the image from the Dockerfile:

docker build -t angelkitty/nginx_web:v1 .
# -t sets the repository and name for the new image: angelkitty is the repository
# (user) name, nginx_web is the image name, and :v1 is the tag
# (if no tag is given, latest is used by default)

# After the build, run it with the following command
docker run --name nginx_web -d -p 8080:80 angelkitty/nginx_web:v1

This command starts a container from the nginx-based image, names it nginx_web, and maps container port 80 to host port 8080, so we can access the nginx server in a browser at http://localhost:8080/ (or http://<local IP>:8080/). The following page is displayed:

• Docker command summary and further reading

Common Docker commands

docker run -i -t ubuntu /bin/bash                 # Create an interactive container
docker run --name firstContainer -i -t ubuntu /bin/bash   # Create a named interactive container
docker run --name firstContainer -d ubuntu /bin/bash      # Create a named daemonized container
docker start firstContainer                       # Start a container (or use the container ID)
docker stop firstContainer                        # Stop a container (or use the container ID)
docker attach firstContainer                      # Attach to a running container (or use the container ID)
docker logs firstContainer                        # View the container's logs
docker logs -ft firstContainer                    # Follow the logs with timestamps
docker run --restart=always --name firstContainer -d ubuntu   # Restart the container automatically on exit
docker inspect firstContainer                     # Show detailed information about a container
docker rm <container ID>                          # Remove a container
docker images                                     # List local images
docker ps                                         # List running containers (-a for all)
docker search ubuntu                              # Search publicly available images on Docker Hub
docker login                                      # Log in to Docker Hub; credentials are saved in $HOME/.dockercfg
docker commit -m "A new custom image" --author "dengbocong" <container ID> username/repository:tag
                                                  # Commit a container as a new image; with no tag,
                                                  # Docker automatically applies the latest tag
docker build -t username/repository:tag .         # Build from the Dockerfile in the current directory (the trailing ".")
docker build -t username/repository:tag git@github.com:username/repository
                                                  # Here Docker assumes the Dockerfile is in the root of the Git repository
docker build --no-cache -t username/repository:tag .   # Build without using the cache
docker history <image ID>                         # Show the history of an image
docker run -d -p 80 --name <name> username/repository   # Run a daemonized container with port 80 exposed
docker push username/repository                   # Push an image to Docker Hub
docker images -a -q                               # List the IDs of all local images

Common Dockerfile directives

FROM:

All Dockerfiles start with FROM. The FROM instruction specifies which image the Dockerfile's image is based on, and all subsequent instructions create layers on top of that base image. You can use FROM multiple times to create multiple images in the same Dockerfile. For example, to specify the Python 2.7 base image, we can write:

FROM python:2.7

MAINTAINER: specifies the image creator and contact information, in the format MAINTAINER <name> <email>. Set it to your ID and email address:

MAINTAINER Angel_Kitty <[email protected]>

COPY: copies the localhost <src> (a path relative to the directory containing the Dockerfile) into the container's <dest>. COPY is recommended when a local directory is used as the source. The format is COPY <src> <dest>. For example, to copy the current directory to the /app directory in the container:

COPY . /app

WORKDIR: used together with RUN, CMD and ENTRYPOINT to set the current working directory. It can be set multiple times; if a relative path is given, it is relative to the previous WORKDIR. The default path is /. The general format is WORKDIR /path/to/work/dir. For example, to set the /app path:

WORKDIR /app

RUN: runs commands inside the container at build time. Each RUN instruction adds a new layer of changes on top of the existing image; the layers beneath remain unchanged. The general format is RUN <command>. For example, to install Python dependencies:

RUN pip install -r requirements.txt
EXPOSE: specifies the ports the container will listen on. The format is EXPOSE <port> [<port>...]. For example, to open port 5000:

EXPOSE 5000
ENTRYPOINT: makes your container behave like an executable program. A Dockerfile can have only one ENTRYPOINT; if there are several, only the last one takes effect. The ENTRYPOINT instruction has two formats:

ENTRYPOINT ["executable", "param1", "param2"]   # exec form, recommended
ENTRYPOINT command param1 param2                # shell form

For example, to make a Python image behave like an executable program:

ENTRYPOINT ["python"]

CMD: provides the default command for starting the container. It may or may not include an executable. If no executable is included, one must be specified with ENTRYPOINT, and the CMD arguments are then used as arguments to ENTRYPOINT.

There are three formats for the CMD instruction:

CMD ["executable", "param1", "param2"]   # exec form, recommended
CMD ["param1", "param2"]                 # as default arguments to ENTRYPOINT
CMD command param1 param2                # shell form

Only one CMD can exist in a Dockerfile; if there are several, only the last one takes effect. In shell form, CMD calls /bin/sh -c to run the command. A CMD is overridden by arguments passed on the docker run command line: for example, docker run busybox /bin/echo Hello Docker overrides the image's CMD.
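How ENTRYPOINT and CMD combine, and how docker run arguments replace only CMD, can be simulated with a bit of shell string logic. This mimics the documented behavior; it is not Docker's own code:

```shell
# ENTRYPOINT stays fixed; CMD only supplies default arguments
entrypoint="python"
default_cmd="app.py"

# No extra arguments on docker run: the container runs ENTRYPOINT + CMD
run_args=""
echo "${entrypoint} ${run_args:-$default_cmd}"   # prints: python app.py

# Extra docker run arguments replace CMD, never ENTRYPOINT
run_args="other.py"
echo "${entrypoint} ${run_args:-$default_cmd}"   # prints: python other.py
```

This is why ENTRYPOINT ["python"] plus CMD ["app.py"] starts python app.py by default, while docker run image other.py starts python other.py instead.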

Link summary (additions welcome in the comments)

Build Docker on ECS (CentOS7)

Novice tutorial

DockerInfo

Docker official website documentation

 

Finally, if you found this helpful after reading, please follow and give it a thumbs-up!!