Background

In my time at the new company I have learned a great deal. Before this, my development work was a one-person show: I designed the architecture, managed the project, solved problems, and picked up new skills all on my own. That sounds appealing, and I did enjoy the freedom of building and managing front-end projects by myself, but after joining a large Internet company I realized how much time I had been wasting. On a larger technical team you pick up new skills, tools, architectures, languages, and conventions far more efficiently. I will gradually write up the new things I learn here and share them with you.

Docker

What does Docker do?

Popular understanding: Docker is an open-source application container engine that lets you flexibly create, destroy, and manage multiple containers. Containers are fully sandboxed, with no interfaces between them (similar to iPhone apps), and their performance overhead is very low. You can do anything in a container that you can do on a server, such as running npm run build in a Node container, deploying a project in an Nginx container, or storing data in a MySQL container. Once Docker is installed on the server, you are free to create as many containers as you want. Docker lets developers package an application with its dependencies into a lightweight, portable container that can then be distributed to any popular Linux machine, as well as run under virtualization.
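To get a feel for the create/destroy/manage workflow, here is a minimal sketch of everyday Docker commands (the image and container names are placeholders, not from this article's project):

# Pull an image and start a detached container from it,
# mapping host port 8080 to container port 80
docker run -d -p 8080:80 --name my-nginx nginx:stable-alpine

# List running containers (add -a to include stopped ones)
docker ps
docker ps -a

# Open a shell inside the running container
docker exec -it my-nginx sh

# Stop and remove the container
docker stop my-nginx
docker rm my-nginx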

Install Docker

This walkthrough uses a MacBook Pro laptop as the local machine.

Log in to Aliyun

Open the terminal and enter the Aliyun login command:

ssh -t <Aliyun username>@<Aliyun public IP address> -p <port>

Enter the Aliyun server password. When you see the following prompt, you have successfully logged in to the remote Aliyun server:

Welcome to Alibaba Cloud Elastic Compute Service !

According to the official installation tutorial, execute the following commands:

# Step 1: install required utilities
sudo yum install -y yum-utils

# Step 2: add the Aliyun Docker CE repository
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Step 3: install Docker CE
sudo yum install docker-ce docker-ce-cli containerd.io

# Step 4: start the Docker service
sudo systemctl start docker

# Step 5: verify with the hello-world image
sudo docker run hello-world

If the hello-world welcome output is displayed, the installation is successful.
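Optionally (a common follow-up, not part of the original steps), confirm the installed version and make Docker start on boot:

docker -v
sudo systemctl enable docker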

Install Git

Automated deployment involves pulling the latest code, so Git needs to be installed on the server:

yum install git
Install Node.js

Since this is front-end automated deployment, the processing logic on the cloud server is written in JS, so a Node.js environment needs to be installed:

wget https://nodejs.org/dist/v14.17.1/node-v14.17.1-linux-x64.tar.xz
tar xvf node-v14.17.1-linux-x64.tar.xz
ln -s /root/node-v14.17.1-linux-x64/bin/node /usr/local/bin/node
ln -s /root/node-v14.17.1-linux-x64/bin/npm /usr/local/bin/npm
node -v
npm -v

At this point, the Node.js environment is installed. The software is installed in the /root/node-v14.17.1-linux-x64/ directory by default. To install the software in another directory, such as /opt/node/, perform the following operations:

mkdir -p /opt/node/
mv /root/node-v14.17.1-linux-x64/* /opt/node/
rm -f /usr/local/bin/node
rm -f /usr/local/bin/npm
ln -s /opt/node/bin/node /usr/local/bin/node
ln -s /opt/node/bin/npm /usr/local/bin/npm

Install PM2

Once Node is installed, install PM2, which lets your JS scripts run in the background on the cloud server:

npm i pm2 -g

If the pm2 command cannot be found, execute:

ln -s /root/node-v14.17.1-linux-x64/lib/node_modules/pm2/bin/pm2 /usr/local/bin
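For day-to-day management, a few PM2 commands are worth knowing (a quick sketch; "index" below is the default process name PM2 derives from the index.js script created later in this article):

pm2 list             # show all processes managed by PM2
pm2 logs             # stream the logs of all processes
pm2 restart index    # restart a process by name
pm2 stop index       # stop it without removing it
pm2 delete index     # remove it from PM2's process list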

Configure the GitHub webhook

To understand it simply: a webhook is just a hook. When your repository receives a push, GitHub triggers the webhook and calls the interface address configured as the Payload URL, which tells the Aliyun server to have Docker create the corresponding container and perform a series of build-and-deploy operations according to the contents of the Dockerfile.

Log in to GitHub

Click the front-end project repository you want to deploy

Under Settings → Webhooks, click the Add webhook button in the upper right corner; set the Payload URL to http://<server public IP>:3000/ (the address of the Node script created later) and the Content type to application/json

Test webhook

After the configuration is complete, push a commit to the repository. Then open the webhook's Recent Deliveries view; a successful delivery (as in the original screenshot) means the configuration works.
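Once the index.js server from the later sections is running, you can also simulate GitHub's delivery by hand instead of pushing a commit. This is a sketch with a minimal payload containing only the repository fields the script reads; a real GitHub push payload carries far more:

curl -X POST http://<server public IP>:3000/ \
  -H "Content-Type: application/json" \
  -d '{"repository": {"name": "demo", "full_name": "<your-github-name>/demo"}}'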

Create the Dockerfile in the project root directory

Create an image from the contents of the Dockerfile.

# Stage 1: build
FROM node:lts-alpine as build-stage
WORKDIR /app
# Copy the dependency manifests first so the install layer can be cached
COPY package.json yarn.lock ./
RUN yarn cache clean
# Download dependencies
RUN yarn
COPY . .
# Build
RUN yarn build

# Stage 2: nginx
FROM nginx:stable-alpine as production-stage
# Copy the build-stage output from /app/dist to nginx's web root
COPY --from=build-stage /app/dist /usr/share/nginx/html/
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
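You can test this Dockerfile locally before wiring up the automation (a sketch; the image and container names are arbitrary and mirror the ones the deployment script uses later):

docker build -t demo-image:latest .
docker run -d -p 8002:80 --name demo-container demo-image:latest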

Copy the Dockerfile to the cloud server:

scp ./Dockerfile root@118.89.244.45:/root
Create .dockerignore in the root directory

To understand it simply: similar to .gitignore, .dockerignore tells Docker to skip certain files when copying files into the image

# .dockerignore
node_modules

Because node_modules is ignored, the first COPY command in the Dockerfile copies only package.json and yarn.lock, and the dependencies are installed inside the container (rather than copying the local node_modules, which keeps the container's dependencies consistent), while the second COPY copies all remaining files except node_modules. Then copy the .dockerignore file to the cloud server as well:

scp ./.dockerignore root@118.89.244.45:/root

Create images and containers

The following docker commands are used to destroy the old container before creating a new one.

docker ps -a -f "name=^docker" --format="{{.Names}}"

View all Docker containers whose names start with docker and print only the container name

docker stop docker-container

Stop the container whose name is docker-container

docker rm docker-container

Delete the container named docker-container
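These three steps can be chained into a single line, which is exactly the form the index.js script below uses; xargs -r makes the stop/rm steps no-ops when no matching container exists:

docker ps -a -f "name=^docker" --format="{{.Names}}" | xargs -r docker stop | xargs -r docker rm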

Creating an HTTP Server

Create index.js in the root directory. Port 3000, which the script listens on, must match the Payload URL configured for the webhook.

const http = require("http")
const { execSync } = require("child_process")
const fs = require("fs")
const path = require("path")

// Delete a directory recursively
function deleteFolderRecursive(dirPath) {
    if (fs.existsSync(dirPath)) {
        fs.readdirSync(dirPath).forEach(function (file) {
            const curPath = path.join(dirPath, file)
            if (fs.statSync(curPath).isDirectory()) {
                // Recurse into subdirectories
                deleteFolderRecursive(curPath)
            } else {
                // Delete the file
                fs.unlinkSync(curPath)
            }
        })
        fs.rmdirSync(dirPath)
    }
}

// Collect the POST body and parse it as JSON
const resolvePost = req =>
    new Promise(resolve => {
        let chunk = ""
        req.on("data", data => {
            chunk += data
        })
        req.on("end", () => {
            resolve(JSON.parse(chunk))
        })
    })

http.createServer(async (req, res) => {
    console.log("receive request")
    console.log(req.url)
    if (req.method === "POST" && req.url === "/") {
        const data = await resolvePost(req)
        const projectDir = path.resolve(`./${data.repository.name}`)

        // Remove the old copy of the project
        deleteFolderRecursive(projectDir)

        // Pull the latest code. Note!! Use the git protocol address instead of HTTPS
        // (repository.full_name is "owner/repo" in GitHub's push payload)
        execSync(`git clone git://github.com/${data.repository.full_name}.git ${projectDir}`, {
            stdio: "inherit",
        })

        // Copy the Dockerfile into the project directory
        fs.copyFileSync(path.resolve("./Dockerfile"), path.resolve(projectDir, "./Dockerfile"))

        // Copy .dockerignore into the project directory
        fs.copyFileSync(path.resolve(__dirname, "./.dockerignore"), path.resolve(projectDir, "./.dockerignore"))

        // Create the docker image
        execSync(`docker build . -t ${data.repository.name}-image:latest`, {
            stdio: "inherit",
            cwd: projectDir,
        })

        // Destroy the old docker container (if one exists)
        execSync(`docker ps -a -f "name=^${data.repository.name}-container" --format="{{.Names}}" | xargs -r docker stop | xargs -r docker rm`, {
            stdio: "inherit",
        })

        // Create a new docker container
        execSync(`docker run -d -p 8002:80 --name ${data.repository.name}-container ${data.repository.name}-image:latest`, {
            stdio: "inherit",
        })

        console.log("deploy success")
        res.end("ok")
    }
}).listen(3000, () => {
    console.log("server is ready")
})

Copy it to the cloud server via scp, run from your local (e.g. VS Code) terminal:

scp ./index.js root@118.89.244.45:/root

Open the port in the Pagoda panel

If Nginx was installed through the Pagoda (BaoTa) panel, be sure to open the required ports there

Running the Node script

Run index.js as a background script on the cloud server through the PM2 installed earlier (remember to turn off proxy tools such as Lantern at this point to avoid connection failures):

pm2 start index.js
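Optionally (not covered in the original write-up), make PM2 resurrect the script after a server reboot:

pm2 startup    # register PM2 with the system's init manager
pm2 save       # snapshot the current process list for restoration on boot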

After the demo project is successfully started, access port 8002 of the cloud server (the host port that docker run maps to the container's port 80) to view the demo project. (Before accessing it, ensure that this port is open on the server.)

Run logs

Run pm2 logs on the cloud server to watch the output of index.js, then locally add some "Hello Docker" copy and push it to GitHub to trigger a deployment

Afterword

This demo creates only a single Docker container. When the project is updated, the container has to be destroyed and re-created, so there is a window during which the page cannot be accessed. Real production setups usually create multiple containers and update them one at a time, using load balancing to map user requests to containers on different ports, ensuring that the online service does not go down because of a container update

There are also very mature CI/CD tools built on top of the GitHub platform, for example:

  • travis-ci
  • circleci

A YML configuration file replaces the steps described above of registering a webhook and writing the index.js script to update the container:

# .travis.yml
language: node_js
node_js:
  - 8
branches:
  only:
    - master
cache:
  directories:
    - node_modules
install:
  - yarn install
script:
  - yarn test
  - yarn build

Docker has also launched docker-compose, a better way to manage multiple containers
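As a rough illustration only (the service names and ports here are hypothetical, not from the original article), a docker-compose.yml that runs two copies of the image on different host ports, which a load balancer could then sit in front of, might look like this; docker-compose up -d then replaces the individual docker build/run commands:

# docker-compose.yml (hypothetical sketch)
version: "3"
services:
  web1:
    image: demo-image:latest
    container_name: demo-container-1
    ports:
      - "8002:80"
  web2:
    image: demo-image:latest
    container_name: demo-container-2
    ports:
      - "8003:80"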

Reference

Docker + webhook: front-end automated deployment from scratch