Preface
Thanks to the sudden rise of Node and the growth of front-end engineering, development models, frameworks, and the entire front-end ecosystem have changed dramatically. At the same time, the front end has slowly started to explore other fields, and project deployment is one of them.
In the slash-and-burn era, once npm run build was executed and the build output was handed over to operations, the front end's job was done. The operations engineer would write the path of the output into the nginx configuration on the production server, completing a "simple" deployment.
As projects kept iterating, the front end began to realize how serious the problem was: a lot of time was spent on packaging every time. Five minutes of development and half an hour of packaging became common, and differences in each developer's local environment also meant the final output could differ.
Of course, there are better ways to do this, such as moving the packaging step to a remote server, or automating deployment through the Git repository.
In the boundary-less spirit of the "byte style", this article will implement a front-end automated deployment process from scratch and open the "black box" of project deployment.
The technology stacks involved are as follows:
- docker
- node
- pm2
- shell
- webhook
Introducing Docker
Before starting development, let me introduce the protagonist: Docker.
What is Docker
In short, Docker can flexibly create/destroy/manage multiple “servers” called containers.
You can do everything a server can do in a container, such as running npm run build in a Node container, deploying a project in an nginx container, storing data in a MySQL container, and so on.
Once Docker is installed on a server, you can freely create as many containers as you want. The Docker logo above illustrates the relationship between them: 🐳 is Docker, and each container on its back is a container.
Install Docker
To facilitate local debugging, you can first install docker locally
Mac:download.docker.com/mac/stable/…
Windows:download.docker.com/win/stable/…
Linux:get.docker.com/
After downloading and installing, click the Docker icon to start it; Docker commands can then be used in the terminal.
If the following error appears, check whether the Docker application has actually started:
docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Basic concepts
Docker has three important concepts
- Image
- Container
- Repository
If containers are compared to lightweight servers, then images are the template for creating them. A Docker image can create multiple containers, and their relationship is similar to the relationship between classes and instances in JavaScript
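As a rough illustration of this "class and instance" relationship, here is a minimal sketch using the official nginx image (introduced in more detail below); the container names and ports are arbitrary:

# Pull one image (the "class") and create several containers (the "instances") from it
docker pull nginx
docker run -d --name instance-a -p 8081:80 nginx
docker run -d --name instance-b -p 8082:80 nginx
docker ps   # both containers were created from the same nginx image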
There are two ways to obtain an image
- Dockerfile file is created
- Use an existing image directly on dockerHub or another repository
Dockerfile
Dockerfile is a configuration file, similar to .gitlab-ci.yml / package.json, which defines how to generate an image
Try creating a Docker image using a Dockerfile
Create a file
First create a hello-docker directory and create index.html and Dockerfile files in the directory
<!-- index.html -->
<h1>Hello docker</h1>
# Dockerfile
FROM nginx
COPY index.html /usr/share/nginx/html/index.html
EXPOSE 80
- FROM nginx: build on top of the official nginx image
- COPY index.html /usr/share/nginx/html/index.html: replace the container's /usr/share/nginx/html/index.html with the index.html in the current directory. /usr/share/nginx/html is the directory where nginx in the container stores its default page files, so accessing port 80 of the container will serve the index.html in this directory
- EXPOSE 80: the container exposes port 80. The real port mapping is defined when the container is created
Refer to the official documentation for other Dockerfile configurations
At this point, your file structure should be
hello-docker
|____index.html
|____Dockerfile
Create an image
After creating the Dockerfile file, run the following command in the current directory to create a Docker image
docker build . -t test-image:latest
- build: create a Docker image
- . : use the Dockerfile in the current directory
- -t: tag the image with a name and version
- test-image:latest: name the image test-image and tag it as the latest version
View all images using the Docker images command
Create a container
After the image is successfully created, run the following command to create a Docker container
docker run -d -p 80:80 --name test-container test-image:latest
- run: create and run a Docker container
- -d: run the container in the background
- -p 80:80: map port 80 of the host (before the colon) to port 80 of the container (after the colon)
- --name: give the container a name so it can be located later
- test-image:latest: create the container from the latest version of the test-image image
Use the docker ps -a command to view all containers
Because local port 80 maps to port 80 of the container, the contents of the index.html file are displayed when you enter localhost
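You can also verify this from the terminal; a quick sketch, assuming the container from the previous step is still running:

curl http://localhost
# <h1>Hello docker</h1>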
dockerHub
If Github is the repository for code, DockerHub is the repository for images
Developers can upload images generated by Dockerfile to DockerHub to store custom images, or directly use official images
docker pull nginx
docker run -d -p 81:80 --name nginx-container nginx
The first step is to pull the official Nginx image. The second step is to create a container named nginx-container based on the official Nginx image
Port 81 is used to map to port 80 of the container. If you visit localhost:81, you can see the nginx startup page
Why Docker
Now that we understand the concepts and usage of Docker, why use it at all?
Some people ask, why do I put my environment in a container when I can put it on my own server? Here are a few advantages of using Docker
Consistent environment
Docker solves the age-old conundrum: "it works on my machine" 🙂
Developers can upload the Docker image of the development environment to a Docker registry, then pull and run the same image in the production environment to keep the environments consistent
docker push yeyan1996/docker-test-image:latest
Push an image named docker-test-image from the local machine; the image name must be prefixed with the Docker Hub account name
docker pull yeyan1996/docker-test-image:latest
The server pulls the docker-test-image image under the yeyan1996 account
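For the push to succeed, the image has to carry the account prefix and you need to be logged in first. A rough sketch, assuming the test-image built earlier is what gets re-tagged and pushed:

docker login                                                        # log in with the Docker Hub account
docker tag test-image:latest yeyan1996/docker-test-image:latest    # add the account prefix
docker push yeyan1996/docker-test-image:latest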
Easy to roll back
Like Git, Docker has version control
When creating an image, you can tag the version and quickly roll back to the previous version if there is a problem with one version’s environment
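A minimal sketch of what that looks like, assuming the test image from earlier was also tagged v1 at some point:

docker build . -t test-image:v2                                # tag a new build explicitly
docker stop test-container && docker rm test-container
docker run -d -p 80:80 --name test-container test-image:v1    # roll back to the previous tag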
Environment isolation
Using Docker makes your server cleaner and allows you to build environments in containers
Efficient and resource-saving
Compared to real servers or virtual machines, containers do not include a full operating system, which makes creating and destroying them very efficient
Automated front-end deployment
Now that Docker has been introduced, let's implement front-end automated deployment from scratch
Before moving to Docker, if I wanted to update the content of the live website, I needed to:
- Run npm run build locally to generate the build output
- Upload the output to the server via FTP or similar
- Run git push to commit the code to the repository
After automating front-end deployment:
- Run git push to commit the code to the repository
- The server automatically updates the image
- npm run build runs automatically inside the image to generate the build output
- The server automatically creates the container
As you can see, all the developer needs to do is push the code to the repository, and the rest can be done through automated scripts on the server
Cloud server
My free application server expired...
First, you need a server.
Since this is a personal project, the requirements for the cloud server are not high. Most providers give new users a free trial of 1 to 2 weeks. Here I chose Tencent Cloud with the CentOS 7.6 64-bit operating system; Alibaba Cloud or any other cloud server is perfectly fine too.
Logging in to the cloud server
Readers who are already familiar with cloud server configuration, or who are not using Tencent Cloud, can skip this chapter.
I won't go into the registration process; refer to the provider's tutorial. After logging in to the console you can see the current cloud server's public IP address. For example, the public IP address of the server below is 118.89.244.45.
The webhook will send its requests to this public IP address.
Next we need to log in to the cloud server. There are generally two ways to do this locally: password login and SSH login (or an SSH tool; on Windows you can use Xshell, on macOS you can use PuTTY).
The former needs no configuration, but you have to enter the account password every time you log in. The latter requires registering an SSH key, after which you can log in to the cloud server without a password. I personally prefer the latter, so let's register the SSH key in the console first.
The way to generate the key is the same as git. If you have generated the key before, you can run the following command locally to view it
less ~/.ssh/id_rsa.pub
If no key has been generated, run the following command to generate an SSH public key
$ ssh-keygen -o
Generating public/private rsa key pair.
Enter file in which to save the key (/home/schacon/.ssh/id_rsa):
Created directory '/home/schacon/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/schacon/.ssh/id_rsa.
Your public key has been saved in /home/schacon/.ssh/id_rsa.pub.
The key fingerprint is:
d0:82:24:8e:d7:f1:bb:9b:33:53:96:93:49:da:9b:e3 [email protected]
$ cat ~/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaCxxxxxxxxxxxxxxxxxxxxxxxxBWDSU
GPl+nafzlHDTYxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxPppSwg0cda3
Pbv7kOdJ/MxxxxxxxxxxxxxxxxxxxxxxxxxxxQwdsdMFvSlVK/7XA
t3FaoJoxxxxxxxxxxxxxxxxxxxxx88XypNDvjYNby6vw/Pb0rwert/En
mZ+AW4OZPnTxxxxxxxxxxxxxxxxxxo1d01QraTlMqVSsbx
NrRFi9wrf+M7Q== [email protected]
Paste the generated public key into the cloud server console (as shown above) and click OK.
Besides registering the public key, you also need to bind it to the instance: shut the instance down first, then bind the key.
After binding, restart the server, and you can log in to the cloud server using SSH
ssh <username>@<hostname or IP address>
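For the server in this article, and assuming root as the login user, that would look like:

ssh root@118.89.244.45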
Installing the environment
Then install the basic environment for the cloud server
docker
Docker was installed locally earlier, but the cloud server does not have it by default, so the Docker environment needs to be installed there as well.
There are some differences between cloud server installation and local installation. According to the docker official website installation tutorial, run the following command
# Step 1: install the necessary system tools
sudo yum install -y yum-utils
# Step 2: add the software source information, using the Aliyun mirror
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Step 3: install docker-ce
sudo yum install docker-ce docker-ce-cli containerd.io
# Step 4: start the Docker service
sudo systemctl start docker
# Step 5: run the hello-world image
sudo docker run hello-world
If "Hello from Docker!" is printed, Docker has been installed successfully
git
Automated deployment involves pulling the latest code, so a Git environment needs to be installed as well
yum install git
Cloning over SSH would require registering a public key on GitHub, so HTTPS is used to clone the repository instead
node
Since this is front-end automated deployment, the processing logic on the cloud server is written in JS, so the Node environment needs to be installed; nvm is used here to manage Node versions
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.34.0/install.sh | bash
You then need to add nvm to the environment variables
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion" # This loads nvm bash_completion
Install the latest version of Node using the NVM
nvm install node
Once Node is installed, install pm2, which lets JS scripts run in the background on the cloud server
npm i pm2 -g
Creating a Demo Project
Simply create a project locally using the Vue CLI
vue create docker-test
And upload the demo project to Github, ready to configure Webhook
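If you have not pushed a local project to GitHub before, the steps look roughly like this (assuming an empty docker-test repository has been created on GitHub first; the URL below matches the one used by the deployment script later):

cd docker-test
# Vue CLI has already run git init and made an initial commit
git remote add origin https://github.com/yeyan1996/docker-test.git   # replace with your own repository
git push -u origin master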
webhook
A webhook is, as the name suggests, a hook, i.e. a callback.
Take the Vue lifecycle as an analogy: the mounted hook is triggered when a component is mounted, and inside it you can write callback logic such as fetching data or rendering the page. Similarly, GitHub's webhook sends an HTTP POST request when certain events are triggered on the repository.
By pointing the webhook's request URL at the cloud server's IP address, the cloud server is notified whenever code is pushed to the repository, and can then run the relevant code to automate deployment.
Configuring the webhook
Open the Github repository home page and click Settings on the right
- Payload URL: enter the public IP address of the cloud server; remember to add the http(s) prefix
- Content type: select application/json so the POST request is sent in JSON format
- Trigger events: just the push event, i.e. the repository push event; other events (PR, commit, issue, etc.) can be selected for different needs
A webhook can also be configured with an authentication token; since this is a personal project, that is not covered here
Click Add webhook to add a webhook to the current project. From then on, whenever code is pushed to the docker-test project, a POST request is sent to http://118.89.244.45:3000
Test webhook
Once configured, you can push a commit to the repository, then click the request record at the bottom of the webhook page to view the parameters of the POST request
The payload describes the current repository and the commit that was pushed; here we only use repository.name to get the name of the updated repository
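Later on, once the server-side script is running, you can also simulate this request with curl instead of pushing a real commit; a sketch with the payload trimmed down to the single field this article uses:

curl -X POST http://118.89.244.45:3000 \
  -H "Content-Type: application/json" \
  -d '{"repository": {"name": "docker-test"}}'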
Process project update requests
When the cloud server receives the POST request sent after the project is updated, an image needs to be created/updated for automated deployment
Create a Dockerfile
Create a new Dockerfile in the local project to be used to create the image later
# Dockerfile
# build stage
FROM node:lts-alpine as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# production stage
FROM nginx:stable-alpine as production-stage
COPY --from=build-stage /app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Parse the configuration line by line:
- FROM node:lts-alpine as build-stage: build on the node lts-alpine image and name the stage that has the Node environment build-stage (the alpine variants are much smaller than the latest images, which makes for a better Docker image)
- WORKDIR /app: set the working directory to /app, isolated from other system files
- COPY package*.json ./: copy package.json / package-lock.json to the container's /app directory
- RUN npm install: run npm install inside the container to install the dependencies
- COPY . .: copy the remaining files to the container's /app directory; copying in two steps keeps node_modules consistent
- RUN npm run build: run npm run build inside the container
This uses a trick of Docker’s: multi-stage builds
The build is divided into two phases. The first phase is based on Node images and the second phase is based on Nginx images
- FROM nginx:stable-alpine as production-stage: build on the nginx stable-alpine image and name the stage that has the nginx environment production-stage
- COPY --from=build-stage /app/dist /usr/share/nginx/html: copy the /app/dist output of the build-stage stage into the container's /usr/share/nginx/html directory
- EXPOSE 80: the container exposes port 80
- CMD ["nginx", "-g", "daemon off;"]: run nginx -g daemon off; when the container starts. Once the CMD command exits the container stops, so daemon off keeps nginx running in the foreground
Finally, copy the Dockerfile to the cloud server with the scp command
scp ./Dockerfile root@118.89.244.45:/root
Create a .dockerignore
Similar to .gitignore, .dockerignore makes Docker skip certain files when copying them into the image
Create a new.dockerignore in your local project
# .dockerignore
node_modules
Because the node_modules dependencies need to be consistent between the local machine and the container, the Dockerfile above uses the COPY command twice
The first time just copy package.json and package-lock.json and install the dependencies
Copy all files except node_modules for the second time
Then copy the.dockerignore file to the cloud server as well
scp ./.dockerignore root@118.89.244.45:/root
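Before moving on, the image can be sanity-checked locally now that both the Dockerfile and .dockerignore are in the project; a rough sketch (the image name matches what the deployment script will use later):

# Run inside the local docker-test project
docker build . -t docker-test-image:latest
docker images docker-test-image                              # a small nginx-based image
docker run --rm docker-test-image ls /usr/share/nginx/html   # only the dist output, no node_modules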
Creating an HTTP Server
Since we are front-end developers, Node is used to start a simple HTTP server to handle the POST requests sent by the webhook
Create index.js in your local project
const http = require("http")

http.createServer((req, res) => {
    console.log('receive request')
    console.log(req.url)
    if (req.method === 'POST' && req.url === '/') {
        // ...
    }
    res.end('ok')
}).listen(3000, () => {
    console.log('server is ready')
})
Pull the repository code
When the project is updated, the cloud server needs to pull the latest repository code first
const http = require("http")
+ const {execSync} = require("child_process")
+ const path = require("path")
+ const fs = require("fs")
+ // Recursively delete a directory
+ function deleteFolderRecursive(path) {
+     if (fs.existsSync(path)) {
+         fs.readdirSync(path).forEach(function (file) {
+             const curPath = path + "/" + file;
+             if (fs.statSync(curPath).isDirectory()) { // recurse
+                 deleteFolderRecursive(curPath);
+             } else { // delete file
+                 fs.unlinkSync(curPath);
+             }
+         });
+         fs.rmdirSync(path);
+     }
+ }
+ const resolvePost = req =>
+     new Promise(resolve => {
+         let chunk = "";
+         req.on("data", data => {
+             chunk += data;
+         });
+         req.on("end", () => {
+             resolve(JSON.parse(chunk));
+         });
+     });
http.createServer(async (req, res) => {
    console.log('receive request')
    console.log(req.url)
    if (req.method === 'POST' && req.url === '/') {
+         const data = await resolvePost(req);
+         const projectDir = path.resolve(`./${data.repository.name}`)
+         deleteFolderRecursive(projectDir)
+         // Pull the latest repository code
+         execSync(`git clone https://github.com/yeyan1996/${data.repository.name}.git ${projectDir}`, {
+             stdio: 'inherit',
+         })
    }
    res.end('ok')
}).listen(3000, () => {
    console.log('server is ready')
})
data.repository.name is a property of the webhook payload that records the repository name
Create images and containers
The docker command is used to destroy the old container before creating a new one.
docker ps -a -f "name=^docker" --format="{{.Names}}"
List all Docker containers whose names start with docker, printing only the container names
docker stop docker-container
Stop the container named docker-container
docker rm docker-container
Remove the container named docker-container
Then add the docker-related logic to index.js
const http = require("http")
const {execSync} = require("child_process")
const fs = require("fs")
const path = require("path")

// Recursively delete a directory
function deleteFolderRecursive(path) {
    if (fs.existsSync(path)) {
        fs.readdirSync(path).forEach(function (file) {
            const curPath = path + "/" + file;
            if (fs.statSync(curPath).isDirectory()) { // recurse
                deleteFolderRecursive(curPath);
            } else { // delete file
                fs.unlinkSync(curPath);
            }
        });
        fs.rmdirSync(path);
    }
}

const resolvePost = req =>
    new Promise(resolve => {
        let chunk = "";
        req.on("data", data => {
            chunk += data;
        });
        req.on("end", () => {
            resolve(JSON.parse(chunk));
        });
    });

http.createServer(async (req, res) => {
    console.log('receive request')
    console.log(req.url)
    if (req.method === 'POST' && req.url === '/') {
        const data = await resolvePost(req);
        const projectDir = path.resolve(`./${data.repository.name}`)
        deleteFolderRecursive(projectDir)
        // Pull the latest repository code
        execSync(`git clone https://github.com/yeyan1996/${data.repository.name}.git ${projectDir}`, {
            stdio: 'inherit',
        })
+ // Copy the Dockerfile to the project directory
+ fs.copyFileSync(path.resolve(`./Dockerfile`), path.resolve(projectDir,'./Dockerfile'))
+ // Copy.dockerignore to the project directory
+ fs.copyFileSync(path.resolve(__dirname,`./.dockerignore`), path.resolve(projectDir, './.dockerignore'))
+ // Create a Docker image
+ execSync(`docker build . -t ${data.repository.name}-image:latest `,{
+ stdio:'inherit',
+ cwd: projectDir
+})
+ // Destroy the Docker container
+ execSync(`docker ps -a -f "name=^${data.repository.name}-container" --format="{{.Names}}" | xargs -r docker stop | xargs -r docker rm`, {
+ stdio: 'inherit',
+})
+ // Create a docker container
+ execSync(`docker run -d -p 8888:80 --name ${data.repository.name}-container ${data.repository.name}-image:latest`, {
+ stdio:'inherit',
+})
+ console.log('deploy success')
res.end('ok')
}
}).listen(3000, () => {
console.log('server is ready')
})
In the part that destroys the Docker container, the Linux pipe operator and the xargs command are used to filter out containers whose names start with docker-test (i.e. containers created from the docker-test repository's image), stop them, remove them, and then recreate them
scp is again used to copy index.js to the cloud server
scp ./index.js root@118.89.244.45:/root
Running the Node script
Use the previously installed pm2 to run index.js as a background script on the cloud server
pm2 start index.js
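A few pm2 commands that come in handy here (pm2 names the process after the script, so it shows up as index):

pm2 list            # confirm the index process is online
pm2 logs index      # tail the script's console output
pm2 restart index   # restart after modifying index.js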
After the demo project is successfully started, access port 8888 of the cloud server and view the demo project. (Before accessing the cloud server, ensure that port 8888 is enabled.)
try it
To see if the automated deployment process works
First run pm2 logs on the cloud server to watch the log output of index.js, then change the text locally to Hello Docker and push it to GitHub
Not surprisingly, PM2 prints logs of the clone project
After cloning, the Dockerfile and .dockerignore are copied into the project directory and the image is updated
The old container is then destroyed and a new one is created from the updated image
Finally, visit port 8888 to see the updated text
Done ~
The source code
Docker-test
Pay attention to the Dockerfile, .dockerignore, and index.js files
Final words
The demo above only creates a single Docker container, so when the project is updated the page is unreachable for a while, because the container has to go through the destroy-and-recreate process
In real production, multiple containers are created and updated one by one (rolling updates). Combined with load balancing, user requests are routed to containers on different ports, so the online service does not go down while containers are being updated
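A minimal sketch of the idea (the load balancer itself is out of scope here; nginx or similar would sit in front of these ports and switch traffic while one container is being replaced):

# Keep two containers of the same image alive on different host ports
docker run -d -p 8888:80 --name docker-test-container-1 docker-test-image:latest
docker run -d -p 8889:80 --name docker-test-container-2 docker-test-image:latest
# Update them one at a time so at least one container keeps serving requests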
There are also very mature CI/CD tools based on the Github platform, for example
- travis-ci
- circleci
With a YML configuration file, they simplify the steps above of registering a webhook and writing the index.js script to update the container
# .travis.yml
language: node_js
node_js:
  - 8
branches:
  only:
    - master
cache:
  directories:
    - node_modules
install:
  - yarn install
script:
  - yarn test
  - yarn build
In addition, as environments multiply, the number of containers gradually grows; Docker also provides docker-compose, a better way to manage multiple containers
However, the purpose of this article is to explore the principles behind the process; for maintaining mature open-source projects, the platforms above are recommended
Thank you for reading this far; I hope it helps ~
References
Docker for the front-end combat tutorial
Sanyuki’s trivial blog entries
Dockerize Vue.js App
Best practices for writing Dockerfiles