🐳 Docker cross-platform migration + Nginx proxy + a webhook listening for git push = a one-command automated continuous deployment system

This article walks you through building the system from scratch, for static sites or front-end/back-end separated sites

For non-static pages, see 👉🏻 Docker🐳+Nginx+WebHook Automated Continuous Deployment 2.0

This records the joy of going from 0 to 1 🌈 and the pits I stepped in along the way 😤. I hope it helps you get past them. If you have other questions, please contact me — let's learn and improve together ~ 🤝

This article uses an Aliyun ECS server running Ubuntu 16.04 64-bit. If you don't have one, consider buying one — they're quite cheap with the current discounts on Aliyun

If you are already familiar with Docker and don't need the concepts, you can skip straight to 👉🏻 the hands-on section

Docker installation

The installation tutorial is straightforward, so I won't elaborate here — follow it step by step and it will work. 👉🏻 Installation tutorial 👉🏻 Registry mirror setup tutorial

```shell
# Hello World test
docker run hello-world
```

If the command above runs and a screen of output appears containing Hello from Docker!, you have succeeded

Docker concept

Image

A Docker image is a special file system. Besides providing the programs, libraries, resources, and configuration files the container needs at runtime, it also contains configuration parameters prepared for runtime (such as anonymous volumes, environment variables, users, etc.). An image contains no dynamic data, and its contents do not change after the build

Common commands

```shell
# pull an image
docker pull XXX
# search for an image
docker search XXX
# delete an image
docker rmi XXX
# build an image, usually together with a Dockerfile
docker build -t XXX .
```

```dockerfile
# A Dockerfile is simple; look up the details yourself
# which image to build from
FROM nginx:latest
# which commands to execute
RUN echo 'Hello, Docker!' > /usr/share/nginx/html/index.html
```

Summary: an image is an application bundled with its environment, encapsulating configuration information and execution instructions

Container

The relationship between an Image and a Container is similar to that between a class and an instance in object-oriented programming. An Image is a static definition and a Container is an entity of the Image runtime. Containers can be created, started, stopped, deleted, paused, and so on.
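The class/instance analogy can be made concrete in code. A minimal sketch (my own illustration, not part of Docker itself): one "image" definition, several independent "container" instances created from it.

```javascript
// Analogy: an image is a static definition (like a class),
// a container is a runtime entity created from it (like an instance).
class Image {
  constructor(name) { this.name = name }   // the "build"
}

class Container {
  constructor(image) {
    this.image = image                     // docker run -> new container
    this.state = 'running'
  }
  stop() { this.state = 'stopped' }        // docker stop
}

const nginxImage = new Image('nginx:latest')
const c1 = new Container(nginxImage)       // several containers can share
const c2 = new Container(nginxImage)       // one and the same image
c1.stop()
console.log(c1.state, c2.state)            // -> stopped running
```

Stopping one container does not affect the other, and neither touches the image — exactly the relationship described above.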

Common commands

```shell
# the first three digits of a container ID are enough in these commands
# show all containers
docker ps -a
# show all container IDs
docker ps -aq
# stop/kill/start/restart a container
docker stop/kill/start/restart ID
# delete a container
docker rm ID
# delete all containers (running and stopped)
docker rm $(docker ps -aq)
# enter a shell inside the container and do whatever you need; bash is recommended
docker exec -it ID /bin/bash
# -d runs in the background; -p maps ports: the port before the colon is the server port, the one after is the Docker port
docker run -p 8000:80 -d XXX
```

Personal summary: when an image is used, a new container is created from it, and the image starts to work

Docker Registry

After an image is built, it can easily be run on the current host. However, if the image needs to be used on other servers, we need a centralized service to store and distribute images, and Docker Registry is exactly that service

Like the npm registry, it stores images

docker-compose installation

If you try to create a new image -> container by hand, you will probably find it quite troublesome. If a single one is this much work, how are several supposed to work together? docker-compose is designed to solve exactly that problem.

Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure all the services your application needs; then, with a single command, all services are created and started from that configuration.

```yaml
# Come test its power
# docker-compose.yml
version: '3.1'
services:
  mongo:
    image: mongo
    restart: always
    ports:
      - 27017:27017
  mongo-express:
    image: mongo-express
    restart: always
    ports:
      - 8000:8081
```

A single command starts all the configured containers

```shell
# start
docker-compose up
```

Nginx image installation

Here we only use Docker to pull the Nginx image

```shell
# If you've ever installed a server by hand, you know how painful it can be
# This is the power of Docker: just one line
docker pull nginx
```

The real complexity of Nginx is not installation but its configuration files, which are notoriously long-winded and very unfriendly to beginners! Fortunately, this article doesn't need much of them ~

WebHook configuration

Even if you haven't heard of webhooks, you've probably had the thought: wouldn't it be great if every git push to GitHub automatically pulled the new code and restarted the service? A webhook is the key to doing exactly that

A webhook is, as the name suggests, a hook over the network. It's easy to think of Vue lifecycle hooks or EventEmitter — both are publish-subscribe. Webhooks work the same way: GitHub operations, from pushing to a branch to forking and even starring, all have hooks, configured on GitHub's webhook settings page.

Steps:

  1. Select any repository
  2. Choose Settings
  3. Choose Webhooks
  4. Click Add webhook
  5. Fill in the URL and Secret (you will need both in the hands-on section, so remember them!)
  6. Choose the payload content type
  7. Update webhook

Hands-on

⚓ ️

Well, after all this talk, it's time for the real thing! Let's go!

Start by adding the following two files to your project

Using the webhook

```shell
#!/bin/bash
# autoDeploy.sh
echo "Deploy Project: getting the latest version of the code"

# pull code
git pull

# The two commented lines below are for the docker-compose + Nginx setup
# force a rebuild of the container
# docker-compose down
# docker-compose up -d --force-recreate --build
```
```javascript
// webhook.js
const http = require('http')
// Install this package first: npm i github-webhook-handler -D
const createHandler = require('github-webhook-handler')

function run_cmd(cmd, args, callback) {
  var spawn = require('child_process').spawn;
  var child = spawn(cmd, args);
  var resp = "";
  child.stdout.on('data', function (buffer) {
    resp += buffer.toString();
  });
  child.stdout.on('end', function () {
    console.log('Deploy complete')
    callback(resp)
  });
}

const handler = createHandler({
  path: '/resume-hook', // URL suffix
  secret: 'xxxxxxx'     // your secret
})

// ❗️ Note: open the port in the Aliyun security group
// ❗️ and make sure the Ubuntu firewall does not block the same port
http.createServer((req, res) => {
  handler(req, res, err => {
    res.statusCode = 404
    res.end('no such location')
  })
}).listen(7778, () => {
  console.log('Webhook listening at 7778')
})

handler.on('error', err => {
  console.error('Error', err.message)
})

// Intercept push events and execute the deploy script
handler.on('push', function (event) {
  console.log('Received a push event for %s to %s', event.payload.repository.name, event.payload.ref);
  // only deploy on pushes to master
  if (event.payload.ref === 'refs/heads/master') {
    console.log('deploying master...')
    run_cmd('sh', ['./autoDeploy.sh'], function (text) { console.log(text) });
  }
})
```

❗ Don't forget to open the ports!! This pit made me cry — a lesson paid for in blood 😭

Package your project

I used webpack to build into a dist folder. This folder is used in the next step: it is mounted into the Nginx container to serve the page. Don't forget to change the folder name in the docker-compose.yml configuration if yours is different! You also need to push this folder to GitHub — check whether .gitignore excludes it, and if so, delete that entry

Nginx and docker-compose configuration

Add nginx/conf/docker.conf to your project as follows:

Nginx is very strict about its configuration format, so copy-pasting directly is recommended

```nginx
# Port 80 belongs to the Nginx container and will be mapped later
server {
    listen 80;
    location / {
        root /var/www/html;
        index index.html index.htm;
    }
}
```

Then add the docker-compose.yml file

```yaml
# docker-compose.yml
# dist is the folder produced by the build -- don't forget to change it to your own name!
version: '2'
services:
  nginx:
    restart: always
    image: nginx
    ports:
      - 80:80
    volumes:
      - ./nginx/conf/:/etc/nginx/conf.d
      - ./dist/:/var/www/html/
```

When you're done, uncomment the lines at the end of autoDeploy.sh and git push

As a final step, execute webhook.js

  • Log in to the server and git clone xxx
  • cd xxx
  • node webhook.js (if you don't want to watch the logs, running it under the pm2 daemon is recommended)

Welcome to the discussion. I worked on this whole process for a full day — it was a lot, and it feels great to finally succeed. However, my blog can't use this method, because it's built with Hexo and needs hexo server to start. If you have any suggestions or questions, please feel free to comment 👏👏!