Preface

In a traditional cluster deployment, you need to set up the environment on each machine and configure various middleware. This is not only inefficient, it also makes it hard to keep environments consistent, and every configuration change has to be applied to each machine separately.

Docker solves these problems. However, the official images are not always sufficient, so you often need to build your own image for your application. One way to build an image is interactive: start a container, enter it, make changes, exit, and save the result with the docker commit command. Images built this way are hard to maintain later, though. The usual approach is to write a Dockerfile and build the image with the build command; afterwards, anyone can read the Dockerfile and see at a glance how the image was put together.
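A minimal sketch of the two approaches, assuming a hypothetical image name my_app (these commands are illustrative, not the exact steps used later in this article):

# interactive approach: modify a running container, then snapshot it
docker run -i -t centos /bin/bash        # start a container and enter it
# ... install packages, edit files inside the container, then exit ...
docker commit <container_id> my_app:v1   # save the modified container as a new image

# Dockerfile approach: describe the same steps declaratively, then build
docker build -t my_app:v1 .              # the Dockerfile records every step for later maintenance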

How do you deploy a cluster with an image?

  • The first method (when the cluster cannot connect to the Internet): save the image to a file with the save command, copy that file to each machine via external media, and load it with the load command
  • The second method is to push the packaged image to Docker Hub and pull it directly on each node of the cluster, which is also the easiest way (a sketch of both methods follows this list)
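A rough sketch of both distribution methods, assuming the image pianozcl/test_nginx:v1 that is built later in this article and a reachable target machine:

# method 1: offline transfer via an image archive
docker save -o test_nginx.tar pianozcl/test_nginx:v1   # export the image to a tar file
# copy test_nginx.tar to the target machine (USB drive, scp, ...), then on that machine:
docker load -i test_nginx.tar                          # import the image from the tar file

# method 2: distribute through a registry such as Docker Hub
docker push pianozcl/test_nginx:v1                     # push from the build machine (requires docker login)
docker pull pianozcl/test_nginx:v1                     # pull on each cluster node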

Main text


1. What is a Docker image

Anyone who has installed an operating system should be familiar with this concept: we usually burn an image file to a USB drive and can then install the operating system on multiple machines from it. Similarly, from a Docker image we can start containers on different machines; that is the similarity between the two.

Docker containers and virtual machines are also very similar. The most important difference is that a virtual machine virtualizes the hardware, while a Docker container runs directly on the host kernel.

1.1. Docker file system layers

The structure of a Docker image is similar to the Linux virtualization stack. The first layer (the bottom of the stack) is the Linux kernel, followed by the boot file system. The next layer is the base image, which can be some operating system (CentOS, Ubuntu, etc.). On top of the base image you can build further images: a third layer, a fourth layer, and so on. However, only the file system at the top of the stack is exposed to the container.

1.2. Copy-on-write

The file systems in the image layers are read-only. When a container is started, a writable layer is added on top of the image layers, and this is where the container runs. Committing a new image pushes that writable layer onto the stack as a new read-only layer.
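A small sketch of this behavior, using a hypothetical container named cow_demo: docker diff lists only what changed in the writable layer, and docker commit turns that layer into a new image layer on top of the stack.

docker run -d --name cow_demo centos sleep 3600   # start a container on top of read-only image layers
docker exec cow_demo touch /tmp/new_file          # the change lands in the writable container layer only
docker diff cow_demo                              # show files added or modified on top of the image
docker commit cow_demo centos_with_file:v1        # push the writable layer onto the stack as a new image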


2. Use Dockerfile to build the image

In the following example, I will build an Nginx image on top of a CentOS base image.

Prerequisites

  • Docker is installed on the machine
  • Create a folder (the build context) and create a file named Dockerfile in it (the name must be exactly Dockerfile)

2.1. Pull a CentOS image

docker pull centos   # here I pull the default tag; you can also specify the system and version you need
docker images        # view the pulled image

2.2. Edit the Dockerfile

Dockerfile content

  • FROM: the base image this image is built from
  • MAINTAINER: the author and the author's contact information
  • RUN: executes a command while the image is being built, inside an intermediate container (more on this later)
  • EXPOSE: declares a port the container listens on. If the container is started with -P, the exposed port is mapped to a random port on the host; starting with -p lets you specify the mapping yourself, which overrides the random mapping (a short example follows the Dockerfile below)
FROM centos
MAINTAINER zcl "[email protected]"
RUN yum install -y nginx
RUN echo "I am in your container" >/usr/share/nginx/html/index.html
EXPOSE 80
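To make the EXPOSE behavior described above concrete, here is a hedged sketch using the image built in this section; the host port chosen by -P will vary from run to run:

docker run -d -P --name nginx_random pianozcl/test_nginx:v1 nginx -g "daemon off;"
docker port nginx_random 80    # shows the random host port mapped to the exposed port 80

docker run -d -p 8080:80 --name nginx_fixed pianozcl/test_nginx:v1 nginx -g "daemon off;"
docker port nginx_fixed 80     # shows the explicit mapping to host port 8080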

2.3. Build the image with the build command

Note that the trailing dot '.' specifies the build context (by default Docker looks for the Dockerfile in its root). Here it is the current directory; it can also be replaced with the URL of a remote Git repository, provided that repository has a Dockerfile in its root directory.

docker build -t="pianozcl/test_nginx:v1" .

After the above command is executed, the instructions in the Dockerfile run one by one and a new image is generated.

2.4. Start the container

  • -p: port mapping (host port:container port)
  • nginx -g "daemon off;": Nginx startup arguments. Nginx runs as a daemon by default; this argument keeps it in the foreground so the Docker container can track the Nginx process. Without it the container would exit immediately. Note that the trailing semicolon must not be omitted.
docker run -d -p 80:80 --name nginx_test pianozcl/test_nginx:v1 nginx -g "daemon off;"   # start the container

docker ps -l shows the most recently started container and its port mapping.

Open a browser and you can see the content we wrote into index.html in the Dockerfile.
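If you prefer the command line, a quick check with curl (assuming the container from the previous step is running on the local machine) should return the same content:

curl http://localhost:80
# expected output: I am in your container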


3. Dockerfile instructions

3.1. CMD

CMD is similar to RUN in that both are followed by a command to execute. The difference is that RUN executes while the image is being built, whereas CMD specifies the default command to run when the container starts.

docker run -i -t pianozcl/centos /bin/bash
# the above command is equivalent to writing the following in the Dockerfile:
CMD ["/bin/bash"]
# and then starting the container with:
docker run -i -t pianozcl/centos
  • A Dockerfile can contain only one effective CMD instruction; if there are several, only the last one is executed
  • If docker run specifies a command, it overrides the CMD in the Dockerfile (a short sketch of this override follows this list)
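A small sketch of the override behavior, assuming the pianozcl/centos image above with CMD ["/bin/bash"]:

docker run -i -t pianozcl/centos                        # no command given: the default CMD /bin/bash runs
docker run -i -t pianozcl/centos cat /etc/os-release    # a command is given: it replaces the CMD entirely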

3.2. ENTRYPOINT

ENTRYPOINT is similar to CMD. The difference is that CMD is overridden by the arguments passed to docker run, while ENTRYPOINT receives those arguments as its own parameters; the two can also be used together.

# for example, with ENTRYPOINT ["nginx"] in the Dockerfile:
docker run -t -i pianozcl/test_nginx -g "daemon off;"
# the above is equivalent to the following command:
docker run -t -i pianozcl/test_nginx nginx -g "daemon off;"

ENTRYPOINT can also be used in conjunction with CMD

  • When docker run does not specify command arguments, ENTRYPOINT is combined with CMD, which supplies the default arguments
  • When command-line arguments are specified, they are passed to ENTRYPOINT and the CMD defaults are overridden
# if the Dockerfile contains:
ENTRYPOINT ["nginx"]
CMD ["-h"]
# then:
docker run -t -i pianozcl/test_nginx
# is equivalent to
docker run -t -i pianozcl/test_nginx nginx -h
# and:
docker run -t -i pianozcl/test_nginx -g "daemon off;"
# is equivalent to
docker run -t -i pianozcl/test_nginx nginx -g "daemon off;"
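Unlike CMD, ENTRYPOINT is not replaced by ordinary docker run arguments; if you really need to bypass it, docker run provides the --entrypoint flag. A hedged sketch, assuming the pianozcl/test_nginx image above:

docker run -i -t --entrypoint /bin/bash pianozcl/test_nginx   # skip the nginx ENTRYPOINT and get a shell instead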

3.3. WORKDIR

WORKDIR sets the working directory in which subsequent RUN and ENTRYPOINT instructions are executed, as in the following Dockerfile:

WORKDIR /home/test1
RUN bundle install        # this command runs in the test1 directory
WORKDIR /home/test2
ENTRYPOINT ["rackup"]     # the entrypoint runs in the test2 directory

When running docker run, you can also override the working directory specified in the Dockerfile with the -w parameter:

sudo docker run -ti -w /var/log ubuntu pwd
# output: /var/log

3.4. ENV

ENV configures environment variables that are available to all subsequent instructions in the Dockerfile and inside the running container. In the example below the variable effectively serves as a prefix for the command path:

# in the Dockerfile:
ENV JAVA_HOME /home/mybin
# then:
RUN $JAVA_HOME/test.sh
# is equivalent to
RUN /home/mybin/test.sh
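Variables set with ENV are also visible inside the running container, which gives a quick way to verify them; a sketch assuming the JAVA_HOME line above was baked into a hypothetical image named my_env_image:

docker run --rm my_env_image env | grep JAVA_HOME
# expected output: JAVA_HOME=/home/mybin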

3.5. ADD & COPY

Both copy files from the current build context into the image. The difference is that ADD automatically extracts recognized local compressed archives (for example tar files) while copying, whereas COPY copies files exactly as they are.

COPY app.jar /opt/application/app.jar   # copy the jar file from the current directory into the container's application directory
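A short Dockerfile sketch contrasting the two instructions, assuming hypothetical files app.jar and app.tar.gz in the build context:

COPY app.jar /opt/application/app.jar                    # copied exactly as-is
ADD app.tar.gz /opt/application/                         # a recognized local archive is unpacked into the target directory
ADD https://example.com/config.tmpl /opt/application/    # ADD can also fetch a remote URL (remote files are not unpacked)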