What is Docker?

Today, with the rapid development of computer technology, Docker is in full swing, especially at first-tier Internet companies in China, where its use is very common and even counts as a plus in job interviews. If you don't believe me, take a look at the following picture.

This is a recruitment requirement for a Java development engineer that I saw on a job site. Familiarity with Docker is listed as a plus for quick onboarding, which shows how much Internet companies value it.

Of course, for us CTF players, familiarity with Docker lets us quickly build CTF environments and faithfully reproduce real-world vulnerability scenarios from competitions, helping us improve quickly.

There are already plenty of good Docker tutorials on the market, but the author found many of them hard for beginners to follow and not clearly organized; I was quite confused myself at first. Drawing on my own learning experience and a CTF player's perspective, I wrote this tutorial to save beginners some detours and to help you understand and skillfully use Docker. I hope every reader gains a bonus chip for future enterprise interviews after finishing this course; if it helps you, it will have been worth it.

With all that said, what exactly is a Docker?

Before we understand Docker, we first need to distinguish between two concepts, container and virtual machine.

Many readers may have used virtual machines, but the concept of containers is relatively new.

The traditional virtual machines we use, such as VMware and VirtualBox, need to simulate an entire machine, including the hardware. Each virtual machine needs its own operating system, and once a virtual machine is started, all of the resources allocated to it are occupied. Each virtual machine includes the application, the necessary binaries and libraries, and a complete guest operating system.

Container technology shares hardware resources and the operating system with the host, enabling dynamic allocation of resources. Containers contain the application and all of its dependencies but share the kernel with other containers, running as isolated processes in user space on the host operating system.

Container technology is an approach to operating-system virtualization that lets you run applications and their dependencies in resource-isolated processes. By using containers, you can easily package your application's code, configuration, and dependencies into easy-to-use building blocks that achieve environmental consistency, operational efficiency, developer productivity, and version control. Containers help ensure that applications are deployed quickly, reliably, and consistently, regardless of the deployment environment. Containers also give us more fine-grained control over resources, making our infrastructure more efficient. The following figure illustrates the difference between the two.

Docker is a wrapper around Linux containers that provides an easy-to-use interface for working with them. It is currently the most popular Linux container solution.

Linux containers are another virtualization technology developed on Linux. Put simply, a Linux container does not simulate a complete operating system; instead, it isolates processes, adding a protective layer around an otherwise normal process. To the process inside the container, access to various resources is virtualized, achieving isolation from the underlying system.

Docker packages the application and its dependencies in a single file. Running this file generates a virtual container. Programs run in this virtual container as if they were running on a real physical machine. With Docker you don’t have to worry about the environment.

Overall, the Docker interface is quite simple: users can easily create and use containers and put their own applications into them. Containers can also be versioned, copied, shared, and modified just like normal code.

The advantages of Docker

Docker has more advantages over traditional virtualization:

  • Startup speed: Docker containers start in seconds, while virtual machines usually take minutes.
  • Resource usage: Docker requires fewer resources. It virtualizes at the operating-system level, and containers interact with the kernel with almost no performance loss, performing better than virtualization that goes through a hypervisor layer plus a kernel layer.
  • Lightweight: Docker containers can share a single kernel and shared application libraries, so the memory footprint is minimal. On the same hardware, far more Docker containers can run than VMs, giving higher system utilization.
  • Isolation: Docker offers weaker isolation than virtual machines. Docker isolates at the process level, while VMs can be isolated at the system level.
  • Security: Docker is also less secure. Root inside a container is similar to root on the host; once a user in a container escalates from a normal user to root, it effectively has root on the host and can operate without limit. In a VM, the tenant's root and the host's root are separated, and VMs can use hardware isolation technologies such as Intel VT-d and VT-x (ring -1), which prevent VMs from breaking out and interfering with each other. Containers have no form of hardware isolation yet, making them more vulnerable to attack.
  • Manageability: Docker's centralized management tools are not yet mature, whereas virtualization has sophisticated management tools; for example, VMware vCenter provides comprehensive VM management capabilities.
  • High availability and recoverability: Docker achieves high availability for the business through rapid redeployment, while virtualization has proven mechanisms such as load balancing, high availability, fault tolerance, migration, and data protection; VMware can commit to 99.999% availability to ensure service continuity.
  • Fast creation and deletion: VM creation takes minutes, while Docker container creation takes seconds. Docker's fast iteration saves a lot of time in development, testing, and deployment.
  • Delivery and deployment: VMs can use images to achieve consistent environment delivery, but image distribution cannot be systematized. A Dockerfile records the container build process, enabling rapid distribution and rapid deployment in clusters.

We can clearly see the advantages of containers over traditional virtual machine features in the following table:

Feature            Container                         Virtual machine
Startup time       Seconds                           Minutes
Disk usage         Generally MB                      Generally GB
Performance        Close to native                   Weaker than native
System capacity    Thousands of containers per host  Usually dozens of VMs

The three basic concepts of Docker

From the figure above, we can see that Docker includes three basic concepts:

  • Image
  • Container
  • Repository

An image is the prerequisite for running a Docker container, and a repository is where images are stored; clearly, the image is at the core of Docker.

Image

So what exactly is an image?

Docker image can be regarded as a special file system. In addition to providing programs, libraries, resources, configuration files required by the container runtime, Docker image also contains some configuration parameters prepared for the runtime (such as anonymous volumes, environment variables, users, etc.). The image does not contain any dynamic data and its contents are not changed after the build.

An image is a unified view of a stack of read-only layers. If that definition is a little confusing, the following figure will help you understand it.

On the left we see multiple read-only layers stacked on top of each other. Each layer except the lowest has a pointer to the layer below. These layers are implementation details inside Docker and can be accessed on the host's file system. Union File System technology consolidates the different layers into a single file system, providing a unified view that hides the existence of multiple layers; from the user's perspective, there is only one file system. We can see this unified view on the right side of the figure.

Container

A container is almost identical to an image: it too is a unified view of a stack of layers. The only difference is that the container's top layer is readable and writable.

Note that this definition says nothing about whether the container is running; in effect, container = image + read/write layer.

Repository

A Docker Repository is a place where image files are stored centrally. Once an image is built, it can easily run on the current host, but to use the image on other servers we need a centralized service for storing and distributing images; the Docker Registry is such a service. Repository and Registry are sometimes confused and are not always strictly distinguished. The Docker repository concept is similar to Git, and a registry server can be understood as a hosting service like GitHub. In fact, a Docker Registry can contain multiple repositories; each repository can contain multiple tags, and each tag corresponds to one image. So an image repository is the place where Docker centrally stores image files, much like the code repositories we are used to.

Typically, a repository contains images of different versions of the same software, with tags used to distinguish the versions. We can specify which version of the software we want using the format <repository>:<tag>. If no tag is given, latest is used as the default.
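
For example, assuming Docker is installed and can reach Docker Hub, pulling a specific tag versus relying on the default looks like this (a sketch of the naming convention, not a required step):

```shell
# Pull a specific version of the official ubuntu image
docker pull ubuntu:18.04

# With no tag given, this is equivalent to ubuntu:latest
docker pull ubuntu
```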

Warehouses can be divided into two forms:

  • Public repositories
  • Private repositories

Public Docker Registry services are registries open for users to use, allowing them to manage their images. Such public services typically let users upload and download public images for free and may offer a paid service for managing private images.

Besides using public services, users can also set up a private Docker Registry locally. Docker officially provides a Docker Registry image that can be used directly as a private registry service. Once a user has created an image, they can use the push command to upload it to a public or private repository; the next time they need the image on another machine, they can simply pull it from the repository.
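
As a sketch of that workflow (the registry address, repository name, and tag below are illustrative, not real):

```shell
# Tag a locally built image for the target registry
docker tag myapp:latest registry.example.com/myteam/myapp:1.0

# Authenticate, then push the image to the registry
docker login registry.example.com
docker push registry.example.com/myteam/myapp:1.0

# Later, on another machine, simply pull it back down
docker pull registry.example.com/myteam/myapp:1.0
```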

We mainly elaborated some common concepts of Docker, such as Image, Container and Repository, as well as the advantages of Docker from the perspective of traditional virtualization. We can see the architecture of Docker intuitively from the following figure:

Docker uses a C/S (client/server) architecture. The Docker client interacts with the Docker server, which is responsible for building, running, and distributing Docker images. The Docker client and server can run on the same machine, or the client can communicate with a remote Docker server through a REST API, sockets, or other network interfaces.

This figure shows Docker client, server and Docker repository (namely Docker Hub and Docker Cloud). By default, Docker will search for image files in Docker central repository. This design concept of managing images by repository is similar to Git. Of course, this repository can be specified by modifying the configuration, and we can even create our own private repository.

Docker installation and use

There are some preconditions for the installation and use of Docker, mainly reflected in the support of the architecture and kernel. As for architectures, with the exception of x86-64, which Docker has supported since its inception, support for other architectures is constantly being improved and advanced.

Docker comes in CE and EE editions. CE is the Community Edition (free, with a 7-month support cycle); EE is the Enterprise Edition, which emphasizes security, requires payment, and has a 24-month support cycle.

We can refer to the official documentation for the latest Docker support before installation. The official documentation is here:

https://docs.docker.com/install/

Docker also has certain requirements for the functions supported by the kernel, that is, the configuration options of the kernel (such as must enable Cgroup and Namespace related options, and other network and storage drivers, etc.). Docker source code provides a detection script to detect and guide the configuration of the kernel, script link here:

https://raw.githubusercontent.com/docker/docker/master/contrib/check-config.sh

Once the prerequisites are met, installation becomes very simple.

Docker CE installation please refer to the official documentation:

  • macOS: Docs.docker.com/docker-for-…
  • Windows: Docs.docker.com/docker-for-…
  • Ubuntu: Docs.docker.com/install/lin…
  • Debian: Docs.docker.com/install/lin…
  • CentOS: Docs.docker.com/install/lin…
  • Fedora: Docs.docker.com/install/lin…
  • Other Linux distributions: Docs.docker.com/install/lin…

Here we use CentOS 7 for the demonstrations in this article.

Environment preparation

  • Alibaba Cloud server (1 core, 2 GB RAM, 1 Mbps bandwidth)
  • CentOS 7.4, 64-bit

Docker CE supports the 64-bit version of CentOS 7 and requires a kernel version of at least 3.10.

First we need to uninstall any old versions of Docker:

$ sudo yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-selinux \
                  docker-engine-selinux \
                  docker-engine

We execute the following installation command to install dependencies:

$ sudo yum install -y yum-utils \
                      device-mapper-persistent-data \
                      lvm2

Here I had already installed these packages beforehand, so yum reports that the latest versions are already installed.

Install Docker

The Docker package is included in the default CentOS Extras repository, so to install Docker, just run the following yum command:

$ sudo yum install docker

Of course, for test or development environments, Docker officially provides a convenient installation script to simplify the process; on CentOS you can install using this script:

curl -fsSL get.docker.com -o get-docker.sh
sh get-docker.sh

See the docker-install script:

https://github.com/docker/docker-install

After executing this command, the script will automatically do all the preparatory work and install the Edge version of Docker CE on the system.

After the installation is complete, run the following command to verify that the installation is successful:

docker version
or
docker info

If information about the Docker version is displayed, Docker was installed successfully.

Start Docker CE

$ sudo systemctl enable docker
$ sudo systemctl start docker

Simple use of Docker: Hello World

Because my server kept crashing and Docker ran into some problems there, the following demonstrations are based on a Kali Linux environment.

Let's feel the charm of Docker through the simplest image, hello-world!

To grab the image file named hello-world from the repository, run the following command.

docker pull library/hello-world

library/hello-world is the location of the image in the registry: library is the group the image belongs to, and hello-world is the name of the image.

After the image is successfully fetched, you can see it on the machine:

docker images

We can see the following results:

Now we can run the hello-world image:

docker run hello-world

We can see the following results:

After printing this message, hello-world stops running and the container terminates automatically. Some containers do not terminate automatically because they provide a service, such as the MySQL image.

Is it easy? As we can see from the above, Docker is very powerful. In addition, we can also pull in some Ubuntu, Apache and other images, which will be mentioned in future tutorials.

Docker provides a set of simple, practical commands to create and update images. We can download a ready-made application image over the network and run it directly with the docker run command. A container can be understood as a lightweight sandbox: Docker uses containers to run and isolate applications. Containers can be started, stopped, and deleted without affecting the Docker image.

Take a look at the picture below:

The Docker client is the main way Docker users interact with Docker. When you run commands with the docker command line, the client sends them to the server, which executes them. The docker command uses the Docker API. A Docker client can communicate with multiple servers.

We will analyze how Docker container works and learn the working principle of Docker container so that we can manage our container by ourselves.

Docker architecture

In the above study, we briefly explained Docker's basic architecture. We learned that Docker uses a C/S (client/server) architecture: the Docker client interacts with the Docker server, and the server is responsible for building, running, and distributing Docker images. We also know that the client and server can run on one machine, and that the client can communicate with a remote Docker server through a REST API, sockets, or other network interfaces.

We can intuitively understand the architecture of Docker from the following figure:

Docker’s core components include:

  1. Docker Client
  2. Docker daemon
  3. Docker Image
  4. Docker Registry
  5. Docker Container

Docker uses the client/server architecture. The client sends requests to the server, which builds, runs, and distributes containers. The client and server can run on the same host, and the client can communicate with a remote server through sockets or a REST API. Some of these terms, such as REST API, may be unfamiliar; don't worry, later articles will explain them clearly.

Docker Client

Docker Client is the Docker client. It is the command-line interface (CLI) tool Docker provides and is the main way many Docker users interact with Docker. The client can build, run, and stop applications, and can also interact with a remote Docker host. The most commonly used client is the docker command, with which we can easily build and run Docker containers on a host.

Docker daemon

Docker Daemon is the server component, running as a Linux background service. It is Docker's most central background process; we also call it the daemon. It is responsible for responding to requests from the Docker client and translating those requests into system calls that carry out container-management operations. The process starts an API server in the background that receives the requests sent by the Docker client; the requests are dispatched through a route inside the daemon to the specific handler function that executes them.

We can roughly divide it into the following three parts:

  • Docker Server
  • Engine
  • Job

The architecture of the Docker Daemon is as follows:

The Docker Daemon receives requests from the Docker Client through the Docker Server module and processes the requests in the Engine. Then, according to the request type, the specified Job is created and run. Docker Daemon runs on Docker host and is responsible for creating, running, monitoring containers, building and storing images.

There are several kinds of work a Job may perform:

  • Obtain images from a Docker Registry
  • Manage local container images through the graphdriver
  • Configure the container's network environment through the networkdriver
  • Execute processes inside the container through the execdriver

Since both Docker Daemons and Docker clients are started by the executable Docker, their startup processes are very similar. When the Docker executable is run, the running code distinguishes between the two through different command line flag parameters, and finally runs the corresponding parts of both.

To start the Docker Daemon, run the following command

docker --daemon=true
docker -d

Then the main() function of Docker resolves the corresponding flag parameters of the above commands, and finally completes the startup of the Docker Daemon.

The following figure shows the startup process of the Docker Daemon:

By default, the Docker Daemon can only respond to client requests from local hosts. To allow remote client requests, you need to enable TCP listening in the configuration file. We can configure it as follows:

1. Edit the configuration file /etc/systemd/system/multi-user.target.wants/docker.service and append -H tcp://0.0.0.0 to the ExecStart line, which allows clients from any IP address to connect.

2. Restart the Docker Daemon

systemctl daemon-reload
systemctl restart docker.service
Copy the code

3. We can communicate with the remote server through the following commands

docker -H <server-ip> info

The -H option specifies the address of the Docker server host; the info command displays information about the Docker server.
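
Putting this together, a remote daemon can be addressed explicitly per command or via an environment variable (a sketch; the IP address is illustrative, and 2375 is the conventional unencrypted Docker API port):

```shell
# One-off: query a remote Docker daemon
docker -H tcp://203.0.113.10:2375 info

# Or set DOCKER_HOST so every docker command targets the remote daemon
export DOCKER_HOST=tcp://203.0.113.10:2375
docker info
```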

Docker Image

Docker image can be regarded as a special file system. In addition to providing programs, libraries, resources, configuration files required by the container runtime, Docker image also contains some configuration parameters prepared for the runtime (such as anonymous volumes, environment variables, users, etc.). The image does not contain any dynamic data and its contents are not changed after the build. Docker images can be viewed as read-only templates through which Docker containers can be created.

Images can be generated in several ways:

  1. Create an image from scratch
  2. Download and use a ready-made image created by someone else
  3. Create a new image on top of an existing image

We can describe the contents and creation steps of an image in a text file called a Dockerfile, and build a Docker image by executing the docker build command. A later tutorial will devote a full article to this topic.

Docker Registry

Docker Registry is the repository of Docker image, and its position in the Docker ecosystem is shown in the figure below:

When docker push, docker pull, and docker search are run, they actually communicate with the Docker Registry through the Docker daemon.

Docker Container

A Docker container is a running instance of a Docker image; it is where the application actually runs, system resources are consumed, and services are provided. The Docker container provides the runtime environment: we can use a Docker image as the system disk, add the project code we have written, and run it to provide services.

How do Docker components collaborate to run containers

At this point, I’m sure you’re already familiar with the Docker infrastructure. Do we remember the first container that ran? Now let’s use the Hello-world example to see how the various Docker components work together.

The container startup process is as follows:

  • The Docker client executes the docker run command
  • The Docker daemon finds that the hello-world image is not available locally
  • The daemon downloads the image from Docker Hub
  • When the download completes, the hello-world image is saved locally
  • The Docker daemon starts the container

The specific process can be seen in the following illustration:

We can see from docker images that hello-world has been downloaded locally.

We can list running containers with docker ps or docker container ls. Since hello-world stops after printing its message and the container terminates automatically, no running container shows up when we check.

Having worked through the container workflow, we can summarize how the Docker components cooperate to run a container:

  1. The Docker client executes the docker run command
  2. The Docker daemon finds that the image we need is not available locally
  3. The daemon downloads the image from Docker Hub
  4. When the download completes, the image is saved locally
  5. The Docker daemon starts the container

Now that we understand these processes, the commands themselves should hold no surprises. Let me walk you through some commonly used Docker commands.

Common Docker commands

We can view the detailed help documentation with docker --help. Here I will only cover the commands we are likely to use a lot in competitions or daily work.

For example, if we need to pull a docker image, we can use the following command:

docker pull image_name

image_name is the name of the image. For example, to download the CentOS image from Docker Hub, we can use the following command:

docker pull centos:latest

centos:latest is the image name together with its tag. When the Docker daemon finds that the image is not available locally, it automatically downloads it from Docker Hub. After downloading, the image is saved under the /var/lib/docker directory by default.

If we want to check how many images exist on the host, we can use the following command:

docker images

To find out which containers are currently running, use the following command:

docker ps -a

-a displays all containers, including those that are not currently running

How do we start, restart, and stop a container? We can use the following command:

docker start container_name/container_id
docker restart container_name/container_id
docker stop container_name/container_id
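
A minimal sketch of that lifecycle, assuming a container named web already exists (the name is illustrative):

```shell
docker stop web      # stop the running container
docker start web     # start the stopped container again
docker restart web   # a stop followed by a start

docker ps -a         # check the container's state at any point
```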

If we want to enter the container at this point, we can use the attach command:

docker attach container_name/container_id

If we want to run an image and open a bash shell inside the resulting container, we can use the following command:

docker run -t -i image_name /bin/bash

If we want to delete an image that is referenced by a container, it cannot be deleted until the container is destroyed. So first we stop the container:

docker ps
docker stop container_name/container_id

We then delete the container with the following command:

docker rm container_name/container_id

Then we delete the image:

docker rmi image_name

So much for the common Docker commands; we will use them again and again in future articles.

What is Dockerfile

We've already covered some of the basic concepts of Docker. From a CTF player's point of view, we can use a Dockerfile to define an image and run a container from that image, simulating a real vulnerability scenario. There is no doubt, then, that the Dockerfile is the key to images and containers, and it also makes defining image contents very easy. Having said all that, what exactly is a Dockerfile?

Dockerfile is a configuration file that automatically builds docker images. Users can use Dockerfile to quickly create customized images. The commands in Dockerfile are very similar to shell commands under Linux.

We can intuitively feel the relationship among Docker image, container and Dockerfile through the following picture.

As we can see from the figure above, a Dockerfile defines the image, and running the image with a docker command starts the container.

A Dockerfile is made up of command statements, one per line, and supports comment lines that begin with #.

In general, we can divide a Dockerfile into four parts:

  • Base (parent) image information instruction: FROM
  • Maintainer information instruction: MAINTAINER
  • Image operation instructions: RUN, ENV, ADD, WORKDIR, etc.
  • Container start instructions: CMD, ENTRYPOINT, USER, etc.

Here is a simple example of a Dockerfile:

FROM python:2.7
MAINTAINER Angel_Kitty <[email protected]>
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
EXPOSE 5000
ENTRYPOINT ["python"]
CMD ["app.py"]

We can analyze the above process:

  • 1. Pull the Python 2.7 base image from Docker Hub
  • 2. Declare the maintainer information
  • 3. Copy the current directory into the container's /app directory; COPY copies the local host's <src> (a path relative to the directory containing the Dockerfile) into the container's <dest>
  • 4. Set the working directory to /app
  • 5. Install the dependency packages
  • 6. Expose port 5000
  • 7. Start the app

This is what a Dockerfile for starting a Python Flask app looks like (Flask is a lightweight web framework for Python).

Common directives for Dockerfile

Let's go through the instructions commonly used in a Dockerfile.

Every instruction in a Dockerfile has the format INSTRUCTION arguments. Instruction names are case-insensitive, but uppercase is recommended. Let's take a formal look at these instructions.

FROM

The format is FROM <image> or FROM <image>:<tag>. All Dockerfiles start with FROM: it specifies which image the Dockerfile's image is based on, and all subsequent instructions build on it. You can use FROM multiple times to create multiple images in the same Dockerfile. For example, to base our image on Python 2.7, we would write:

FROM python:2.7

MAINTAINER

MAINTAINER specifies the image creator and contact information, in the format MAINTAINER <name>. Here I set it to my ID and email:

MAINTAINER Angel_Kitty <[email protected]>

COPY

COPY copies <src> (a path relative to the directory containing the Dockerfile) on the local host into <dest> inside the container. COPY is recommended when using a local directory as the source. The format is COPY <src> <dest>. For example, to copy the current directory to the /app directory in the container:

COPY . /app

WORKDIR

WORKDIR sets the current working directory for subsequent RUN, CMD, and ENTRYPOINT instructions. It can be set multiple times; a relative path is interpreted relative to the previous WORKDIR. The default path is /. The general format is WORKDIR /path/to/workdir. For example, to set the path to /app, we can do the following:

WORKDIR /app
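
A small fragment illustrating how relative WORKDIR paths stack (the paths are made up for the example):

```dockerfile
WORKDIR /a    # working directory is now /a
WORKDIR b     # relative path: now /a/b
WORKDIR c     # relative path: now /a/b/c
RUN pwd       # would print /a/b/c
```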

RUN

RUN executes commands inside the container. Each RUN instruction adds a new layer on top of the existing image, and the layers beneath remain unchanged. The general format is RUN <command>. For example, to install the Python dependencies, we do the following:

RUN pip install -r requirements.txt

EXPOSE

The EXPOSE instruction declares a port the container listens on, in the format EXPOSE <port> [<port> ...]. For example, in the example above, we expose port 5000:

EXPOSE 5000

ENTRYPOINT

ENTRYPOINT lets your container behave like an executable program. Only one ENTRYPOINT can take effect in a Dockerfile; if there are several, only the last one takes effect.

The ENTRYPOINT command also has two formats:

  • ENTRYPOINT ["executable", "param1", "param2"]: the exec form (recommended)
  • ENTRYPOINT command param1 param2: the shell form

For example, if we want to turn a Python image into an executable program, we can do this:

ENTRYPOINT ["python"]

CMD

The CMD instruction provides the default command for starting the container. CMD may or may not include an executable: if it does not, an executable must be specified with ENTRYPOINT, and the CMD arguments are passed as arguments to ENTRYPOINT.
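
For example, pairing the two instructions from our earlier Dockerfile (a sketch; <image> stands for whatever the built image is named):

```dockerfile
ENTRYPOINT ["python"]
CMD ["app.py"]
# docker run <image>           runs: python app.py
# docker run <image> other.py  runs: python other.py
```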

CMD commands have three formats:

  • CMD ["executable","param1","param2"]: the exec form (recommended)
  • CMD ["param1","param2"]: the form without an executable, supplying default arguments to ENTRYPOINT
  • CMD command param1 param2: the shell form

Only one CMD can take effect in a Dockerfile; if there are several, only the last one takes effect. In the shell form, CMD runs the command via /bin/sh -c.

A CMD is overridden by arguments passed on the docker run command line; for example, docker run busybox /bin/echo Hello Docker overrides the image's CMD.

For example, to start app.py by default, we can use the following instruction:

CMD ["app.py"]

Of course, there are other commands, and we’ll go through them as we use them.

Build Dockerfile

Now that we have roughly covered how to write a Dockerfile, we can write an example ourselves:

mkdir static_web
cd static_web
touch Dockerfile
vi Dockerfile    # press i to enter insert mode

Here is the content of the Dockerfile we are building:

FROM nginx
MAINTAINER Angel_Kitty <[email protected]>
RUN echo '<h1>Hello, Docker!</h1>' > /usr/share/nginx/html/index.html

After editing, press Esc to leave insert mode, then type :wq to save and exit.

We execute in the directory where the Dockerfile is located:

docker build -t angelkitty/nginx_web:v1 .

-t sets the repository and name for the new image: angelkitty is the repository name, nginx_web is the image name, and :v1 is the tag (if omitted, latest is used by default).

After the build, use the docker images command to list all images. If there is an entry whose REPOSITORY is angelkitty/nginx_web and whose TAG is v1, the build succeeded.

Next, use the docker run command to start the container:

docker run --name nginx_web -d -p 8080:80	angelkitty/nginx_web:v1

This command starts a container from the image, names it nginx_web, and maps host port 8080 to container port 80. We can then use a browser to access the nginx server at http://localhost:8080/ or http://<local-IP>:8080/, where the following page is displayed:
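
We can also check from the command line (a sketch; assumes the container above is running on this host):

```shell
# Request the page served by the container
curl http://localhost:8080/
# The response body should be our custom page: <h1>Hello, Docker!</h1>
```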

This is a simple example of using Dockerfile to build an image and run the container!

References

  • Docker – From getting started to practice
  • Docker tutorial