What can you learn from this article?

  • Understand basic Docker concepts
  • Use Docker to quickly deploy a back-end project (Node.js + MongoDB + Redis + Nginx) across platforms
  • Learn some common Linux operations
  • Write a Dockerfile
  • Write a docker-compose file
  • Write some common Nginx configuration files

PS

Container deployment has many advantages, such as easy migration of containers from one computer to another.

If you want to learn more about traditional real machine deployment, check out my article on how to deploy a front-end project to a remote server step by step from scratch

What is Docker?

In short, it is a tool for packaging, distributing, and deploying applications. It can be thought of as a lightweight virtual machine, but it runs as a container.

It supports various systems: Linux, macOS, Windows, and so on. Containerized deployment reduces the cost of deploying projects across different platforms.

No more “what works on my computer doesn’t work on the server.”

Docker basic concepts

You need to understand these basic concepts before using Docker

  • Image
  • Container
  • Repository

An image can be built from a Dockerfile or downloaded from the Docker Hub registry

  • The relationship between images and containers in Docker is like the relationship between classes and instances
  • Images can be generated from Dockerfile files, and containers are created from images
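To make the class/instance analogy concrete, here is a minimal, hypothetical Dockerfile (the file names are made up for illustration):

```dockerfile
# The "class": a recipe from which images are built
FROM node:16-alpine
WORKDIR /app
# Copy a hypothetical entry script into the image
COPY index.js .
CMD ["node", "index.js"]
```

Building this file with docker build -t my-image . produces an image, and each docker run my-image creates a new container from it, much like instantiating a class.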

Configuring registry mirrors for Docker in China

Linux system

```shell
vim /etc/docker/daemon.json
```

On Windows, find and open the daemon.json file at:

C:\Users\<your user name>\.docker\daemon.json

Then modify the registry-mirrors field; you can add multiple mirror addresses, which speeds up image downloads.

```json
{
  "builder": {
    "gc": {
      "defaultKeepStorage": "20GB",
      "enabled": true
    }
  },
  "experimental": false,
  "features": {
    "buildkit": true
  },
  "registry-mirrors": [
    "https://registry.docker-cn.com"
  ]
}
```
  • Docker China: registry.docker-cn.com
  • USTC: docker.mirrors.ustc.edu.cn
  • Netease: hub-mirror.c.163.com

Hello world

Docker allows you to run applications inside a container. Use the docker run command to run an application inside a container:

```shell
docker run ubuntu:15.10 /bin/echo "Hello world"
```

If the image does not exist locally, it will be downloaded from the remote registry. This is equivalent to installing an Ubuntu virtual environment inside the Docker container, in which you can execute various Linux commands.

Interactive container

This is equivalent to opening the console in the container virtual environment.

After the image name comes the command to run, and here we want an interactive shell, so /bin/bash is used:

```shell
docker run -it ubuntu:15.10 /bin/bash
```
  • -t: allocates a pseudo-terminal in the new container
  • -i: keeps the container's standard input (STDIN) open so you can interact with it
  • -d: runs the container in the background, without entering interactive mode
  • -p: publishes a container port to the host
  • Type exit in the console to exit

If the -d parameter is passed, the container runs in the background, so how do you enter the container?

One way is the docker attach <container ID> command. However, if you enter the container this way and then type exit, the whole container exits and no longer keeps running in the background.

The recommended way is docker exec, which opens a new terminal in the container; typing exit there does not stop the container:

```shell
docker exec -it <container ID> /bin/bash
```

Container status

Enter the docker ps -a command to view all containers.

To start a stopped container, enter docker start <container ID>. Similarly, to stop a running container, enter docker stop <container ID>.

The docker restart <container ID> command restarts a container.

| Field | Meaning |
| --- | --- |
| CONTAINER ID | The container ID |
| IMAGE | The image used |
| COMMAND | The command run when the container starts |
| CREATED | When the container was created |
| STATUS | The state of the container |

There are seven container states:

  • Created
  • Restarting
  • Running (or Up)
  • Removing (being removed)
  • Paused
  • Exited (stopped)
  • Dead

Delete a container

```shell
docker rm -f <container ID>
```

Clean up all terminated containers:

```shell
docker container prune
```

View images

Enter the docker images command to list local images.

| Field | Meaning |
| --- | --- |
| REPOSITORY | The repository the image came from |
| TAG | The image tag |
| IMAGE ID | The image ID |
| CREATED | When the image was created |
| SIZE | The image size |

Remove an image

```shell
docker rmi test:v0.0.1
```

Pull an image

When you run an image that does not exist locally with docker run, the image is downloaded automatically, but you can also download it in advance with the docker pull command.

```shell
docker pull ubuntu:15.10
```

Search for images

Just go to hub.docker.com/

Install the software

For example, find the Redis image on Docker Hub as above and install the latest version:

```shell
docker run -d -p 6379:6379 --name redis redis:latest
```

The -p option is followed by <host port>:<container port>, that is, it maps the container port onto the host port.

Mount a host directory into the container

Use the -v option

```shell
docker run -p 35:3000 --name my-server -v <absolute path>:/app -d server:v1
```
  • Bind mount: -v with an absolute host path
  • Volume: -v with an arbitrary volume name

The command above mounts the code at an absolute path on the host machine into the /app directory in the container, runs it in the background, names the container my-server, and uses version v1 of the server image.

Container-to-container communication

Create a virtual network for containers to communicate over:

```shell
docker network create my-net
```

Once the network is created, you can specify it when starting a container. For example, start the Redis container on the my-net network and give it an alias with --network-alias:

```shell
docker run -d --name redis --network my-net --network-alias redis redis:latest
```

docker-compose

You can use docker-compose to combine multiple containers and then start them all with one click.

For example, Docker Desktop for Windows already ships with docker-compose, so it does not need to be installed separately. On macOS or Linux, it needs to be installed separately.

Run the docker-compose -v command to check whether the docker-compose installation is successful

You need to write a docker-compose configuration file and then use commands to run it.

Containers started by docker-compose all share the same default network, so there is no need to configure a network separately.
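As a minimal sketch (the service names here are illustrative, not from the project below), two services declared in one docker-compose.yml can reach each other simply by service name:

```yaml
# Both services join the same default network created by docker-compose,
# so the app container can reach Redis at the hostname "redis"
services:
  app:
    image: node:16-alpine
    command: node /srv/server.js
  redis:
    image: redis:latest
```

Inside the app container, redis:6379 resolves to the redis service, without any explicit networks section.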

Enter docker-compose up to start all the containers defined in the compose file.

For details about docker-compose usage, see the following practice section

Common docker-compose commands

View container running status:

```shell
docker-compose ps -a
```

Start the containers and build:

```shell
docker-compose up --build -d
```

Build containers without the cache:

```shell
docker-compose build --no-cache
```

Remove the containers:

```shell
docker-compose down
```

Restart the containers:

```shell
docker-compose restart
```

Stop the containers:

```shell
docker-compose stop
```

Restart a single service:

```shell
docker-compose restart service-name
```

Enter the command line of a container service:

```shell
docker-compose exec service-name sh
```

View the run logs of a container service:

```shell
docker-compose logs service-name
```

Practice

Basic deployment

Let's take a front-end application and create an image. As an example, I'll use an admin dashboard I wrote.

```shell
docker run -d --name admin --privileged -p 8080:8080 -v ${PWD}/:/admin node:16.14.2 \
  /bin/bash -c "cd /admin && npm install -g pnpm && pnpm install && pnpm run start"
```

This creates a Docker container and runs it in the background. The --privileged flag grants the container root privileges, and port 8080 of the container is exposed on port 8080 of the host. ${PWD} is the absolute path of the current directory, which is the root directory of the code. Using node:16.14.2, the following commands run in the container's console:

cd /admin: go to the /admin directory in the container

npm install -g pnpm: globally install the pnpm package manager (my project uses pnpm)

pnpm install: install the dependencies

pnpm run start: start the project, which runs on port 8080

If you want to modify files in the container, you need vim, but you may find that the container does not have the vim command. Install it:

```shell
apt-get install vim
```

Use Docker for back-end project deployment

First you need to prepare a cloud server. Here I use Tencent Cloud with the CentOS 7 system, which is also a Linux system. If you need a cloud server, you can buy one from Tencent Cloud or Alibaba Cloud; with student status, a server costs one or two hundred yuan a year. I won't go into the purchase step in detail.

Here we mainly talk about how to deploy the Nginx + Node.js + Redis + Mongo project, developed locally with docker-compose, to the cloud server.

Docker installation

You can log in to the cloud server through the official website of Tencent Cloud or Alibaba Cloud, or log in from your local console using SSH. Install Docker after logging in successfully. If you don't know how to connect to a remote server from your local machine, you can read my other article on how to deploy a front-end project to a remote server from zero, step by step.

```shell
sudo yum install docker-ce docker-ce-cli containerd.io
```

When the installation finishes, the console displays Complete!. Then enter docker -v in the console to see the Docker version information:

```
Docker version 20.10.13, build a224086
```

If you are not on CentOS, you can find the installer on the official website at https://docs.docker.com/get-docker/ and pick the one matching your computer's operating system.

Set Docker to start on boot

```shell
sudo systemctl enable docker
```

Start Docker

```shell
sudo systemctl start docker
```

Enter docker info to view detailed Docker information:

```
Server:
  Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
  Images: 0
  ......
```

Test installation results

Typing the following command pulls the hello-world image from Docker Hub and starts it:

```shell
sudo docker run hello-world
```

The console finally shows Hello from Docker!, which proves that Docker was installed successfully.

Install docker-compose on the server

Docker-compose is a tool for defining and running multi-container Docker applications. You can set up multiple containers and then start all of them with a single command.

To install on Linux, type the following command in the console:

```shell
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
```

After the download completes, grant execute permission by running:

```shell
sudo chmod +x /usr/local/bin/docker-compose
```

Finally, enter the following command in the console to check whether the installation succeeded:

```shell
docker-compose -v
# docker-compose version 1.29.2, build 5becea4c
```

Installing Git on the server

```shell
yum install -y git
```

After the installation completes, check the version in the console:

```shell
git --version
# git version 1.8.3.1
```

This means the Git installation was successful.

Initialize Git

Once Git is installed, initialize it. When using Git for the first time, we need to configure a Git user name and email; you can use the account of your GitHub or GitLab repository.

Configure a user name

```shell
git config --global user.name "<your user name>"
```

Configure your email

```shell
git config --global user.email "<your email>"
```

Once this is configured, we can type the following to see all of our configuration, then check whether user.name and user.email are configured correctly:

```shell
git config -l
```

Configuring SSH Keys

After configuring the key, you no longer need to enter a password when pushing and pulling code with Git, which is convenient.

```shell
ssh-keygen -t rsa -C "<your email>"
# Generating public/private rsa key pair.
# Enter passphrase (empty for no passphrase):
# Enter same passphrase again:
```

Your key is stored in /root/.ssh/id_rsa

```
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
```

Next, add the private key to the SSH agent by typing:

```shell
ssh-add ~/.ssh/id_rsa
# Identity added: /root/.ssh/id_rsa (/root/.ssh/id_rsa)
```

Then view the public key, which needs to be copied into your Git settings.

```shell
cat ~/.ssh/id_rsa.pub
# Displays a long string; copy it for the next step
```
  • Click your GitHub avatar, then choose Settings (second from the bottom)
  • In the options on the left, find the key icon: SSH and GPG keys
  • In the SSH keys panel, click the green New SSH key button on the right
  • Give the key any title you like, paste the string you just copied, and confirm

Next enter the following command on the server console to verify that the configuration is successful

```shell
ssh -T git@github.com
```

If the following output is displayed, the configuration succeeded, and the Git setup is complete.

```
The authenticity of host 'github.com (20.205.243.166)' can't be established.
ECDSA key fingerprint is xxxxxxxxxxxxxxxxxx
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'github.com,20.205.243.166' (ECDSA) to the list of known hosts.
Hi xxxxxxx (your Git name)! You've successfully authenticated, but GitHub does not provide shell access.
```

Prepare the finished project

The project I'm using here is a Node.js server that uses MongoDB and Redis, with Nginx as a gateway and a reverse proxy configured.

Our project structure looks like this; let's use this directory structure for the demonstration:

```
|-- epidemic-compose
    |-- docker-compose.yml   # docker-compose orchestration logic
    |-- epidemic-server      # the node server
    |-- mongo                # stores the mongo init script and the container's mounted data directory
    `-- nginx                # nginx configuration
```

Let's see what the second-level directories look like:

```
|-- epidemic-compose
    |-- docker-compose.yml
    |-- epidemic-server
    |   |-- commitlint.config.js
    |   |-- Dockerfile          # container build configuration
    |   |-- nest-cli.json
    |   |-- package.json
    |   |-- .env                # where the environment variables go
    |   |-- .dockerignore       # ignores node_modules
    |   |-- pnpm-lock.yaml
    |   |-- README.md
    |   |-- src                 # source code location
    |   |-- tsconfig.build.json
    |   `-- tsconfig.json
    |-- mongo
    |   |-- mongo-volume        # mounts the mongo container's database data
    |   `-- init-mongo.js       # creates the initial mongo account
    `-- nginx
        `-- nginx.conf          # nginx configuration
```

Write a Docker configuration file

Let's start by looking at how to write the Dockerfile in the epidemic-server directory; this Dockerfile will eventually package the server into a container.

```dockerfile
# Base image
FROM node:16.14.2-alpine
# Author
LABEL maintainer="Dachui"
# Encoding
ENV LANG="C.UTF-8"
# Copy the project files into the build directory
COPY . /app/server
# Copy package.json from the project into /app/server
COPY ./package.json /app/server
# Copy pnpm-lock.yaml into /app/server
COPY ./pnpm-lock.yaml /app/server
# Set the working directory
WORKDIR /app/server
# Install pnpm from the Taobao registry mirror
RUN npm install -g pnpm --registry=https://registry.npm.taobao.org
# Install pm2, used to keep the server process alive
RUN pnpm install -g pm2
# Install the project dependencies
RUN pnpm install
# Build the project
RUN pnpm run build
# Expose port 3000
EXPOSE 3000
# Start the built entry file dist/main.js with pm2
CMD ["pm2-runtime", "dist/main.js"]
```

Next, write the docker-compose.yml file; docker-compose can orchestrate multiple containers and start them all in one click.

```yaml
version: "3"
# All services share the mynet network
networks:
  mynet:

services:
  mongo:
    image: mongo:latest
    container_name: mongo
    volumes:
      # Mount the container's /data/db directory to the host
      - ./mongo/mongo-volume:/data/db
      # init-mongo.js is executed when the mongodb container is initialized;
      # it creates a default role for the database
      - ./mongo/init-mongo.js:/docker-entrypoint-initdb.d/mongo-init.js:ro
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: <your root password>
      MONGO_INITDB_DATABASE: my-database
    ports:
      - 27017:27017
    networks:
      - mynet

  redis:
    image: redis:latest
    container_name: redis
    restart: always
    environment:
      - TZ=Asia/Shanghai
    ports:
      - 6379:6379
    # This command sets the default password for Redis;
    # in Node we use ioredis to connect to Redis
    command: redis-server --appendonly yes --requirepass "redispassword"
    networks:
      - mynet

  server:
    build: ./epidemic-server
    container_name: server
    ports:
      - 3000:3000
    restart: always
    environment:
      - TZ=Asia/Shanghai
    depends_on:
      - mongo
      - redis
    networks:
      - mynet

  nginx:
    image: nginx:alpine
    container_name: nginx
    volumes:
      # Mount nginx.conf from the local nginx directory into the container
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
    ports:
      - 80:80
      - 443:443
    restart: always
    environment:
      - TZ=Asia/Shanghai
    networks:
      - mynet
    depends_on:
      - server
```

Next let’s look at the init-mongo.js file in the mongo directory

This file is mainly used to set the initial account and password for the epidemic-server database when the Mongo container is created.

```javascript
// Log in with the MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD
// from docker-compose.yml, then switch to the MONGO_INITDB_DATABASE database
db = db.getSiblingDB("my-database");
// Create a default user for the database
db.createUser({
  user: "user1",
  pwd: "password1",
  roles: [
    {
      // Give the user read/write permission on the my-database database
      role: "readWrite",
      db: "my-database",
    },
  ],
});
```

This is how we connect to the database in Node.js using Mongoose

```
# Normally what follows the @ is a domain name, but here we use Docker
# container communication, so we fill in the service name instead,
# i.e. services -> mongo in the docker-compose.yml above
mongodb://user1:password1@mongo/my-database
```
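As a sketch of how such a connection string can be assembled (the helper function below is my own illustration, not part of the project):

```javascript
// Build a MongoDB connection URI. When containers share a docker
// network, "host" is the docker-compose service name (here "mongo")
// rather than a domain name.
function buildMongoUri(user, pwd, host, database) {
  return `mongodb://${user}:${pwd}@${host}/${database}`;
}

console.log(buildMongoUri("user1", "password1", "mongo", "my-database"));
// mongodb://user1:password1@mongo/my-database
```

In the project itself, a URI like this would be passed to Mongoose's connect call.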

Finally, let's look at the nginx.conf file, where the Nginx service configuration is written.

```nginx
# User to run nginx as
user nginx;
# Number of nginx worker processes; setting it to the total number of CPU cores is recommended
worker_processes 1;

events {
    # Use the epoll I/O model (if you don't know which polling method
    # nginx should use, it will automatically choose the one that is
    # best for your operating system)
    # use epoll;
    # Maximum number of connections per worker process
    worker_connections 1024;
}

http {
    # Enable efficient file transfer
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 30;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    include /etc/nginx/conf.d/*.conf;
    default_type application/octet-stream;

    # gzip compression, off by default
    gzip on;
    gzip_types text/plain text/css application/json application/x-javascript text/javascript;
    # Compression level; 4 is recommended
    gzip_comp_level 4;

    # The node service; "server" is the docker-compose service name
    upstream my_server {
        server server:3000;
    }

    server {
        listen 80;
        # Your server address
        server_name localhost;

        # If you have a front-end project, you can serve it here
        # location / {
        #     root <front-end project path>;
        #     index index.html index.htm;
        #     try_files $uri $uri/ /index.html;
        # }

        # My node service prefixes its routes with /api, so nginx matches
        # requests with the /api prefix and proxies them to the node
        # service running on port 3000
        location /api/ {
            proxy_pass http://my_server/api/;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Real-Port $remote_port;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
```

Project deployment

Normally there are several ways to deploy local code to a remote server:

  1. After connecting to the remote server, send the local code directly to the server with a file-transfer command, then perform all operations on the server
  2. Build all the images locally and upload them to Docker Hub (usually as a private repository), then pull the images from Docker Hub on the server
  3. After developing locally, upload the code to Git, then pull the code on the server through Git and deploy it

Here we use the third approach for code management:

After the local code is developed, upload it to Git, then pull it from the Git repository on the server. When cloning, use the SSH link, not the HTTPS link, like the following:

Git clone [email protected]: username/repository name.gitCopy the code

Next, create a new project directory under root on your server and go to that directory, then pull the code from Git

```shell
cd ~ && mkdir myproject && cd myproject
git clone git@github.com:<username>/<repository name>.git
```

Enter the following command to view all files and directories in the current folder

```shell
ls -a
```

Then enter the repository. Ours is called epidemic-compose, so run cd epidemic-compose to enter the project folder, ready to build the project.

Then run the following command and wait for the build to complete; after that, the project is accessible:

```shell
docker-compose up -d --build
```

Finally, our Node.js service is deployed on port 3000, and since we configured a reverse proxy with proxy_pass, we can access our server directly through port 80. Let's see the result.

We’re done

Look: as long as we finish developing the project and write the Docker configuration files, we can quickly deploy the project from one machine to another. I developed on Windows here, then used Docker to quickly deploy the project to a Linux server. It's not hard at all!

Conclusion

We can also automate the last step, for example by triggering a hook every time we push the latest code to Git, which pulls the latest code from the repository on the server and restarts the containers. That gives us a complete automated deployment pipeline.
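As a hedged sketch of what such a hook could look like, here is a hypothetical GitHub Actions workflow (the secret names, server paths, and repository layout are all assumptions, and it relies on the third-party appleboy/ssh-action):

```yaml
# .github/workflows/deploy.yml (illustrative)
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      # SSH into the server, pull the latest code, then rebuild and restart
      - uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.SERVER_HOST }}
          username: root
          key: ${{ secrets.SSH_PRIVATE_KEY }}
          script: |
            cd ~/myproject/epidemic-compose
            git pull
            docker-compose up -d --build
```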