Original address: github.com/rccoder/blo…

A clickbait headline might have called it: “To understand Docker, just read this!”

Preface

Docker has attracted wide attention from companies of all sizes since it was open-sourced. These days, the joke goes, it is not that an Internet company's operations stack happens to run on Docker (or Pouch, etc.); rather, a company whose stack doesn't can hardly call itself an Internet company.

This article briefly introduces Docker's basic concepts, entry-level usage, and some scenarios in which Docker can greatly improve efficiency.

How it works

The simplest, and wrong, way to think of Docker is that “Docker is a virtual machine with very good performance.”

As the phrasing above suggests, this is a misconception. Docker is a step beyond traditional virtual machines. Specifically, Docker does not virtualize a set of hardware on the host and then run a full guest operating system on top of it; instead, processes inside a Docker container run directly on the host (Docker handles the isolation of files, networks, and so on). As a result, Docker is lighter and faster, and far more containers can be created on the same host.

Docker has three core concepts: Image, Container, and Repository.

  • Image: the concept of an image will be familiar to programmers who have ever installed an operating system. But unlike an ISO image for Windows, images in Docker are layered and reusable, rather than simply a bunch of files stacked together (similar to the difference between a compressed source tarball and a Git repository).

  • Container: a container cannot exist without the support of an image; it is the runtime carrier of an image (similar to the relationship between an instance and a class). Using Docker's virtualization technology, a container gets its own independent space for ports, processes, files, and so on. A container is thus isolated from the host, but can communicate with it through ports, volumes, and networks.

  • Repository: a Docker repository is similar to a Git repository, with a repository name and tags. After an image is built locally, it can be distributed through a repository. Commonly used registries include Docker Hub (https://hub.docker.com/) and Alibaba Cloud's registry (https://cr.console.aliyun.com/).

Common commands

1. Install

Installing Docker is very convenient: on macOS, Ubuntu, and other systems there are one-click installers or scripts. See the official Docker documentation for details.

After installation, type docker in a terminal. If usage instructions are printed, the installation has most likely succeeded.

2. Find the base image

Docker Hub and similar sites provide many images. Generally we pick one of them as a base image and then build on top of it.

Here we use the Ubuntu base image as an example to configure a Node environment.

Because of the long network path, accessing Docker Hub from China can be slow, so you can use one of the mirror accelerators provided by domestic vendors.

3. Pull the base image

Use the docker pull command to pull an image from a registry to the local machine. During the pull, you can see that the image is fetched in multiple “layers”.

> docker pull ubuntu:18.04
18.04: Pulling from library/ubuntu
c448d9b1e62f: Pull complete
0277fe36251d: Pull complete
6591defe1cd9: Pull complete
2c321da2a3ae: Pull complete
08d8a7c0ac3c: Pull complete
Digest: sha256:2152a8e6c0d13634c14aef08b6cc74cbc0ad10e4293e53d2118550a52f3064d1
Status: Downloaded newer image for ubuntu:18.04

Execute docker images to see all local images:

> docker images
REPOSITORY               TAG                 IMAGE ID            CREATED             SIZE
ubuntu                   18.04               58c12a55082a        44 hours ago        79MB

4. Create a Docker container

The docker create command creates a container from an image and prints the container ID.

> docker create --name ubuntuContainer ubuntu:18.04
0da83bc6515ea1df100c32cccaddc070199b72263663437b8fe424aadccf4778

To start the created container, use docker start.

> docker start ubuntuContainer

Use docker ps to view running containers.

> docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
9298a27262da        ubuntu:18.04        "/bin/bash"         4 minutes ago       Up About a minute                       ubuntuContainer

Use docker exec to enter the container.

> docker exec -it 9298 /bin/bash
root@9298a27262da:/# ls
bin  boot  dev  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
root@9298a27262da:/# exit

With docker run, you can create and start a container in one step, and drop straight into it.

> docker run -it --name runUbuntuContainer ubuntu:18.04 /bin/bash
root@57cdd61d4383:/# ls
bin  boot  dev  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
root@57cdd61d4383:/#

# docker ps shows that runUbuntuContainer is running
> docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
57cdd61d4383        ubuntu:18.04        "/bin/bash"         9 seconds ago       Up 8 seconds                            runUbuntuContainer
9298a27262da        ubuntu:18.04        "/bin/bash"         9 minutes ago       Up 6 minutes                            ubuntuContainer

5. Install the Node environment in the container

Once inside the container, everything works as usual. Let's install a simple Node environment.

> apt-get update
> apt-get install wget
> wget -qO- https://raw.githubusercontent.com/creationix/nvm/v0.33.8/install.sh | bash

# nvm is not available in the current session right after installation;
# reopen the shell before using it
> nvm install 8.0.0
> node -v

6. Commit the container to create a new image

As with Ghost images on Windows, you often want to customize your own image: install some base environment (such as Node above) and then snapshot it as your own base image. This is where docker commit comes in handy.

> docker commit --author "rccoder" --message "curl+node" 9298 rccoder/myworkspace:v1
sha256:68e83119eefa0bfdc8e523ab4d16c8cf76770dbb08bad1e32af1c872735e6f71

# docker images now shows the newly created rccoder/myworkspace
> docker images
REPOSITORY               TAG                 IMAGE ID            CREATED             SIZE
rccoder/myworkspace      v1                  e0d73563fae8        20 seconds ago      196MB

Now let's try out the new image.

> docker run -it --name newWorkSpace rccoder/myworkspace:v1 /bin/bash
root@9109f6985735:/# node -v
v8.0.0

Looks fine.

7. Push image to Docker Hub

Once the image is built, how do you share it with others? Here we take pushing to Docker Hub as an example.

First, register an account on Docker Hub, then log in from the terminal and push.

> docker login
> docker push rccoder/myworkspace:v1
The push refers to repository [docker.io/rccoder/myworkspace]
c0913fec0e19: Pushing [=>                                                 ]  2.783MB/116.7MB
bb1eed35aacf: Mounted from library/ubuntu
5fc1dce434ba: Mounted from library/ubuntu
c4f90a44515b: Mounted from library/ubuntu
a792400561d8: Mounted from library/ubuntu
6a4e481d02df: Waiting

8. It’s time to use Dockerfile

Continuous integration with Docker? Judging only from what we have covered so far, you would have to copy code from somewhere into a container and execute commands by hand (yes, that sounds a bit like travis-ci), which is surprisingly far from what you may have heard about Docker.

It’s time for the Dockerfile!

A Dockerfile is a script made up of a series of instructions and arguments. Running docker build executes the script to build an image, automating the manual steps above (similar to .travis.yml in travis-ci).

Dockerfiles all follow this format:

# Comment
INSTRUCTION arguments

A Dockerfile must begin with FROM base_image to specify the base image.
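Beyond FROM, a handful of instructions cover most needs: RUN executes a command at build time, COPY copies files from the build context into the image, EXPOSE documents a port, and CMD sets the default startup command. A hedged sketch only, assuming a hypothetical Node service built on the official node:8 image (the file names and port below are made up for illustration):

```dockerfile
# Illustrative only: a minimal Dockerfile for a hypothetical Node service
FROM node:8

# COPY moves files from the build context into the image
COPY . /app
WORKDIR /app

# RUN executes at build time; each RUN adds an image layer
RUN npm install

# EXPOSE documents the port the service listens on
EXPOSE 3000

# CMD is the default command when a container starts
CMD ["node", "index.js"]
```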

Please refer to the Dockerfile reference for more detailed specifications and instructions. Here we use the rccoder/myworkspace:v1 image built above as the base and create a directory a in the root directory.

Dockerfile is as follows:

FROM rccoder/myworkspace:v1
RUN mkdir a

Then execute:

> docker build -t newfiledocker:v1 .
Sending build context to Docker daemon  3.584kB
Step 1/2 : FROM rccoder/myworkspace:v1
 ---> 68e83119eefa
Step 2/2 : RUN mkdir a
 ---> Running in 1127aff5fbd3
Removing intermediate container 1127aff5fbd3
 ---> 25a8a5418af0
Successfully built 25a8a5418af0
Successfully tagged newfiledocker:v1

# Create a new container from newfiledocker and open a shell in it
> docker run -it newfiledocker:v1 /bin/bash
root@e3bd8ca19ffc:/# ls
a  bin  boot  dev  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var

With the power of Dockerfiles, Docker opens up endless possibilities.

What Docker can do

With all that said, what can Docker do in a real production environment? Common uses include the following (feel free to add more in the comments).

1. Switch between multiple environments

In business development, you often need to keep the development environment separate from the production environment. With Docker, the code together with its environment can be moved from development to production intact and uncontaminated, and combined with some automation this enables automated releases.
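As a sketch of the idea (the image contents, file names, and variables below are hypothetical, not from this article): build one image, then run that exact image in every environment, varying only runtime configuration.

```dockerfile
# One image for all environments; behavior switches via env vars at run time
FROM node:8

COPY . /app
WORKDIR /app
RUN npm install --production

# Default to production; a dev box can override it, e.g.:
#   docker run -e NODE_ENV=development myapp:1.0.0
ENV NODE_ENV=production

CMD ["node", "server.js"]
```

Because the same image is promoted unchanged from development to production, "works on my machine" differences disappear.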

2. Front-end cloud construction

Because of node_modules, different developers in the same repository often end up with different package versions without even knowing it, which eventually causes problems after release. Docker can be used to build in fresh containers in the cloud: remote, pollution-free, low-cost builds that guarantee everyone uses exactly the same versions.

(Why not just use shrinkwrap/lock files?)
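One hedged way to picture such a cloud build (the base tag and npm scripts below are assumptions, not from this article): the container pins the Node version and installs exactly what the lock file records, so every build resolves identical dependencies regardless of whose machine triggered it.

```dockerfile
# Reproducible build container (illustrative; adapt tags and scripts to your project)
FROM node:10

WORKDIR /app

# Install exactly the versions recorded in the lock file
COPY package.json package-lock.json ./
RUN npm ci

# Then build the project itself
COPY . .
RUN npm run build
```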

3. One-click configuration in complex environments

In some scenarios you may need to set up an extremely complex environment (for example, a Java environment for new graduates). Docker can encapsulate that environment configuration into an image, which can then be used directly at very low cost.
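A sketch of how such an image might be defined for the Java example (the package choices and paths are illustrative assumptions, not prescriptive):

```dockerfile
# Hypothetical "ready-to-use Java environment" image for new team members
FROM ubuntu:18.04

# Install the JDK and common tooling once, inside the image
RUN apt-get update && \
    apt-get install -y openjdk-8-jdk maven git && \
    rm -rf /var/lib/apt/lists/*

ENV JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64

CMD ["/bin/bash"]
```

A newcomer then only needs one docker run -it command to get the full environment, with no manual setup.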

4. Continuous integration unit testing

Much like what travis-ci does.

5. Isolate applications from multiple versions and files

For example, one project depends on Node 6 while another depends on Node 8 (for this particular case, a version manager such as nodeinstall is fine if disk space allows), or 100 WordPress applications run on the same server: Docker can isolate them and prevent them from contaminating each other.

6. Save money

Well: low-cost, safe overselling of server resources (ahem).

References

  • Use the Docker command line
  • Dockerfile reference
  • Best practices for writing Dockerfiles
