As mentioned in the previous article.

Some friends and I treat a website as both a technology platform and a personal blog. For someone only two years out of school who had done nothing but front-end work, building a complete site covering front end, back end, and operations looked like a very long road. But having a goal gives you motivation, so when the project started I threw myself into studying the operations side. After dozens of videos and countless Baidu searches, I finally built a reasonably stable, automatically deployed project with separated front and back ends!

Since front-end automation is relatively simple, I will walk through a front-end automation example first, and then explain how the docker + Django + uWSGI + Nginx back-end automated deployment is achieved.

I suggest following along with the video explanation to get up to speed quickly!

The knowledge points used in this article are:

  • gitlab
  • gitlab-cicd
  • gitlab-runner
  • docker
  • vue
  • django
  • nginx
  • uwsgi

Don’t worry if some of these are unfamiliar or you haven’t heard of them; this article will cover them all!

Overall project framework

Don’t understand the diagram yet? It doesn’t matter! Take in what you can, and let’s move on.

The initial idea: the code repository is GitLab’s official hosted service, and day-to-day code management works the same as with Git. For the front end, a container is created on the server with Docker, using Nginx to serve the front-end project. For the back end, another Docker container runs Nginx as a proxy: static files are served by Nginx directly, while dynamic requests are proxied to uWSGI, which handles them.

Then the goal is: when code is pushed from my local machine, the corresponding project on the server updates and redeploys automatically.

docker

Since my deployment to the server is based on Docker, here is a brief introduction to Docker.

To put it simply, Docker is a tool that can create virtual machines, except the virtual machines it creates are not complete ones; rather, it builds a complete environment at the lowest possible cost. For example, if you want a Node environment, running `docker run -ti --name=my-node node` drops you into a fresh Node environment, isolated from your current host, which is equivalent to running a Node-only virtual machine.

Docker has three core concepts: the image, the container, and the Dockerfile. How do we understand them? Let me give you an example:

We know the Windows installation file is called an image. After we install that image on a computer, we can operate the computer through Windows; the system we boot into is equivalent to a container. The conclusion: an image never changes; only after installation do we get a container, and that is what we operate inside.

```shell
docker run -ti --name=my-node node
```

Breaking that command down with the Windows analogy:

  • `docker run` is the equivalent of installing Windows
  • `-ti` is the equivalent of booting up and landing on the Windows desktop (an interactive terminal)
  • `--name=my-node` names the resulting container `my-node`
  • `node` is the equivalent of the Windows installation file: the image, which Docker fetches from the Docker Hub registry

Now suppose we have performed some operations on Windows, such as installing several programs, and we want to repackage the result as an image for others to install, so that their Windows comes with our software preinstalled. To achieve this in Docker, we write a file called a Dockerfile, which starts from a base image, performs some operations, and is finally packaged into a new image with `docker build`. Others can then `docker run` that image, producing a new container.
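To make this concrete, here is a minimal hypothetical Dockerfile in that spirit (the base image follows the Node example above; the tool installed is my own arbitrary choice, not from this project):

```dockerfile
# Start from the official node image, like starting from a clean Windows install
FROM node
# "Install some software" on top of it: here, a global CLI tool
RUN npm install -g http-server
```

Running `docker build -t my-node-plus .` packages this into a new image, and anyone can then run `docker run -ti my-node-plus` to get a container with the tool already installed (the image name `my-node-plus` is hypothetical).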

That is enough for a brief understanding; if you want to know more about Docker, I also have a document here for reference.

Gitlab code base

I am using the official GitLab service: gitlab.com/

GitLab actually provides a self-hosted code repository, but running it requires fairly high server specs, reportedly at least 2 cores and 4 GB of RAM. The two servers I bought are both 1 core 2 GB, so I did not deploy a self-hosted repository.

Since this works exactly like Git, pulling/pushing local code to the GitLab repository is not covered here.

gitlab-cicd

What is CI/CD?

Let’s look at the concept first:

  • CI: Continuous Integration

    The process of automatically detecting, pulling, building, and (in most cases) unit testing after source code changes.

  • CD: Continuous Delivery

Continuous delivery (CD) usually refers to the entire chain of processes (the pipeline) that automatically monitors source code changes and runs them through build, test, package, and related operations to produce a deployable version, essentially without human intervention.

Gitlab CI/CD

After looking at the concept, let’s look at GitLab’s CI/CD implementation in detail.

The figure above is the secondary menu of CI/CD in the left navigation of GitLab. Here we mainly look at two:

  • Pipelines

A pipeline is a group of jobs triggered every time we push a code change (or whatever trigger we configure). The group may contain multiple jobs, each doing something different, such as installing the environment, building, deploying, and so on.

  • Jobs

As the name implies, the jobs here are the tasks mentioned above. A pipeline can contain multiple jobs; how many depends entirely on our needs.

Create a CI/CD pipeline

Now we know that every code change triggers the pipeline we define in GitLab, and that a pipeline contains one or more jobs.

So how do you create a pipeline?

create

First, we need to create a file called .gitlab-ci.yml in the project root directory, as shown below.

write

I will simply paste a .gitlab-ci.yml that I have already written, and walk through it.

```yaml
# Our base image is node:alpine.
# All the jobs below run in a container with a Node environment,
# whose default working directory is our current project.
image: node:alpine
# Here we define the custom stages of our pipeline
stages:
  - install
  - build
  - deploy
# All new files generated during a job are cleared before the next job runs,
# so the paths defined in the cache are kept for the next job
cache:
  key: modules-cache
  paths:
    - node_modules
    - dist
# Here is our first job; its name job_install is arbitrary (Chinese works too)
job_install:
  stage: install # this means the current job belongs to the install stage
  tags:
    - vue3 # the tag of the current job, defined later when we register gitlab-runner
  script: # every job must have a script: the statements to execute
    - npm install # installs dependencies in the project directory inside the container
# This is our second job; the logic is the same as the first, so no details here
job_build:
  stage: build
  tags:
    - vue3
  script:
    - npm run build
# This is our third job: the build is finished by now,
# so we create a new container with Docker to deploy the project
job_deploy:
  stage: deploy
  image: docker # we use docker commands here, so switch the environment from Node to docker
  tags:
    - vue3
  script:
    # create a new image from the Dockerfile in the project root,
    # packaging the build output like an installation package
    - docker build -t rainbow-admin .
    # check whether a container from a previous run exists on the server; if so, delete it
    - if [ $(docker ps -aq --filter name=rainbow-admin-main) ]; then docker rm -f rainbow-admin-main; fi
    # run the image we just created; nginx inside it serves the project,
    # so the external network can access it
    - docker run -d -p 80:80 --name=rainbow-admin-main rainbow-admin
```
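A side note on the `if [ $(docker ps -aq --filter ...) ]` line above: the guard works because `[ $(cmd) ]` is true only when the command prints something. A small sketch with a stand-in function (`check` is hypothetical, simulating the filter's output):

```shell
# `check` stands in for `docker ps -aq --filter name=...`:
# it prints its argument, which may be a container ID or empty.
check() {
  if [ $(echo "$1") ]; then
    echo "removing old container"
  else
    echo "nothing to remove"
  fi
}
check "abc123"   # an ID was found, so the old container would be removed
check ""         # no matching container, so nothing happens
```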

A Dockerfile is the file Docker uses to create an image. It is a very simple file; let’s take a look at it here.

```dockerfile
# Our base image is nginx;
# everything below runs in an environment that already has Nginx
FROM nginx
# nginx's default document root is /usr/share/nginx/html,
# so we just copy the packaged dist into that directory
COPY dist /usr/share/nginx/html
```

OK, at this point, looking back at the overall architecture diagram of the project, there is still a gitlab-runner that has not been mentioned, so let’s continue!

The writing rules for the .gitlab-ci.yml file, and some other CI/CD usage patterns, are expanded on in this document.

gitlab-runner

We now know that every code change we make in GitLab triggers a CI/CD pipeline, so where does this pipeline take place?

That is where gitlab-runner comes in: it is a process running on the target server, associated with the corresponding project through a token. Whenever the project’s pipeline is triggered, the runner picks it up and executes the deployment jobs on that server.

How to create a gitlab-runner

First of all, this time I ran gitlab-runner in a Docker environment. There are of course many other ways to create a runner, which I will not repeat here.

Then we need a server with a Docker environment. If you ask me how to install Docker on a CentOS server, I happen to have a document you can refer to; just follow its commands one by one.

Now that the Docker environment is ready, let’s officially deploy gitlab-runner.

The first step is to pull and run a Gitlab-Runner image.

```shell
docker run -d --name gitlab-runner --restart always \
     -v /srv/gitlab-runner/config:/etc/gitlab-runner \
     -v /var/run/docker.sock:/var/run/docker.sock \
     gitlab/gitlab-runner:latest
```

After the installation is complete, we can use the docker ps command to check whether there is something as shown below.

If so, congratulations, you can continue. If not, then unfortunately I don’t know why either, but you can search Baidu.

Now that we’ve just installed a Gitlab-Runner, it’s not yet associated with our code base, so let’s go ahead and run the following command.

```shell
docker exec -it gitlab-runner gitlab-runner register
```

After executing this command, we will be prompted to enter several things (the order may differ; watch the prompts):

  1. The first is the URL, i.e. the address of your code base; if you don’t know it, see the figure below
  2. The second is your project’s registration token, located as shown in the figure below
  3. The third is a description of the current runner (just write anything)
  4. The fourth adds tags to the current runner, which are the tags referenced in the .gitlab-ci.yml file
  5. The fifth is the executor the runner uses; here we enter `docker`
  6. The sixth asks for a default base image, since I chose the Docker executor; as this is the front end, I enter `node:14`
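For reference, these answers end up in the runner’s config.toml on the server. A sketch of what registration produces (every value here is a hypothetical placeholder, not from my actual setup):

```toml
[[runners]]
  name = "vue3 front-end runner"   # the description from step 3
  url = "https://gitlab.com/"      # the address from step 1
  token = "xxxxxxxx"               # issued by GitLab once the project token from step 2 is verified
  executor = "docker"              # the executor from step 5
  [runners.docker]
    image = "node:14"              # the default base image from step 6
```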

After successful registration, the result looks like the picture below.

At this point, go back to the GitLab CI/CD settings page (where we got the token), refresh, and check whether the runner appears.

Ok, so now we have completed the whole process of front-end automation deployment!

At this point we can modify the code and push it, to see whether the pipeline is triggered, whether the server completes the pipeline jobs, and whether we can see the front-end page when we visit the server.

Q&A

Of course, don’t worry if it doesn’t work out. I tried to deploy it over a hundred times before it was completely successful.

Turning off pipeline email notifications

These email notifications can sometimes be really annoying. If you don’t want them, you can turn them off in the personal settings in the upper right corner, as shown below.

Permission problem after first creating .gitlab-ci.yml and triggering the pipeline

This was a weird problem. It took a long time searching foreign sites to find the cause: shared runners are enabled by default and require account verification.

Here I chose to simply disable shared runners, on the same page where the token is obtained.

Django environment setup

To set up the Django environment, we need a Python environment; depending on the project, there may also be a MySQL environment, an Nginx environment, and so on.

I’m using Python 3.8.8, Nginx, and uWSGI.

Unlike the front end, the back end needs many environments, so we will first build a Docker image dedicated to our project.

This document explains why the nginx + uWSGI + Django arrangement is used.

Create a docker image of the runtime environment

Here I will simply paste my Dockerfile and explain my thinking.

```dockerfile
# My base image: the environment is built on centos
FROM centos
# Author information; can be ignored
MAINTAINER "JyQAQ"
# Dependencies for compiling Python; as you can see there are many,
# and I install nginx here as well while I'm at it
RUN yum install -y zlib-devel openssl-devel ncurses-devel sqlite-devel readline-devel tk-devel gcc make wget libffi-devel mysql-devel nginx
# Download the Python version I want from the official Python site
RUN wget https://www.python.org/ftp/python/3.8.8/Python-3.8.8.tgz
# Unzip to /usr/local/src/
RUN tar -xzvf ./Python-3.8.8.tgz -C /usr/local/src/
# Create the project directory where the project files will live
RUN mkdir -p /opt/app/django_platform
# The following commands run in the Python source directory
WORKDIR /usr/local/src/Python-3.8.8
# Configure the build
RUN ./configure --prefix=/usr/local/python3
# Compile and install
RUN make && make install
# Add Python to the PATH environment variable
ENV PATH /usr/local/python3/bin:$PATH
# Switch to the directory holding centos's nginx configuration
WORKDIR /etc/nginx
# Override the default nginx config with my own
COPY django_platform.conf nginx.conf
# Set the working directory to the root directory of our project
WORKDIR /opt/app/django_platform
# We only need the environment here, not the project files,
# so these two lines are commented out
#COPY . .
#RUN pip3 install -r requirements.txt
# Mount the project directory
VOLUME /opt/app/django_platform
# Expose port 80, which nginx uses
EXPOSE 80
# Start a bash shell by default
CMD "/bin/bash"
```

The above Dockerfile uses an nginx configuration django_platform.conf, as shown below.

I won’t go into the configuration of Nginx here, but I’ll just mention a few changes I made.

```nginx
user nginx;
worker_processes auto;
error_log /opt/app/django_platform/nginx-error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    include /etc/nginx/conf.d/*.conf;

    server {
        listen 80;
        server_name django.rainbowinpaper.com;
        root /usr/share/nginx/html;

        # dynamic requests are forwarded to uWSGI
        location / {
            include uwsgi_params;
            uwsgi_connect_timeout 30;
            # connect to uWSGI over a unix socket
            uwsgi_pass unix:/opt/app/django_platform/uwsgi.sock;
        }
        # static files are served by nginx directly
        location /static/ {
            alias /opt/app/django_platform/static_all/;
        }

        error_page 404 /404.html;
        location = /40x.html {
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }
}
```

So now we run `docker build -t jyqaq/rainbow-django .`. This packages everything into a new image named jyqaq/rainbow-django, which I pushed to Docker Hub so that it can be used directly as the base environment with `FROM jyqaq/rainbow-django` in a Dockerfile.

Django’s .gitlab-ci.yml file

The environment is set up, so the rest is very simple. Without further ado, straight to the code.

```yaml
# The base environment is docker
image: docker

stages:
  - install
  - clear
# This deployment works the same way as the front end's
Deployment environment:
  stage: install
  tags:
    - django
  script:
    - docker build -t django_platform .
    - if [ $(docker ps -aq --filter name=django_rainbow) ]; then docker rm -f django_rainbow; fi
    - docker run -d -p 80:80 --name=django_rainbow django_platform
# Failed pipelines leave useless images behind, so clean them up here
Clean up the docker:
  stage: clear
  tags:
    - django
  script:
    - docker ps -a|grep "Exited" | awk '{print $1}' | xargs docker stop
    - docker ps -a|grep "Exited" | awk '{print $1}' | xargs docker rm
    - docker images|grep none|awk '{print $3}'|xargs docker rmi
```
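The cleanup commands rely on the fact that the first column of `docker ps -a` output is the container ID. A quick way to see what the grep/awk filter extracts is to simulate one line of that output (the ID and names below are made up):

```shell
# One fake line of `docker ps -a` output: ID, image, command, created, status, name
sample='3f2a1b2c3d4e   nginx   "nginx -g"   2 hours ago   Exited (0) 5 minutes ago   web'
# Keep only lines for exited containers, then print the first column (the ID)
echo "$sample" | grep "Exited" | awk '{print $1}'
```

This prints `3f2a1b2c3d4e`, which is exactly what gets piped into `xargs docker stop` and `docker rm`.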

Points to note

Note in particular that the base environment above is docker; in other words, we are now running Docker inside Docker. Of course that is not what we want: we want the server’s own Docker to create the containers. So we need to modify the gitlab-runner configuration file. Recall the command we used to create gitlab-runner:

```shell
docker run -d --name gitlab-runner --restart always \
     -v /srv/gitlab-runner/config:/etc/gitlab-runner \
     -v /var/run/docker.sock:/var/run/docker.sock \
     gitlab/gitlab-runner:latest
```

We mounted /srv/gitlab-runner/config, so on the server open the config.toml in that directory and edit it as follows.

```toml
[runners.docker]
    # Find the volumes entry under [runners.docker] and add the two docker mounts
    volumes = ["/cache", "/usr/bin/docker:/usr/bin/docker", "/var/run/docker.sock:/var/run/docker.sock"]
```

Finally, the Dockerfile used for deployment is shown here.

```dockerfile
FROM jyqaq/rainbow-django

MAINTAINER "JyQAQ"
# Copy the project files into the container
COPY . .
# Install the Django dependencies
RUN pip3 install -r requirements.txt
# Start uWSGI, then run nginx with "daemon off;" so the container stays in the foreground
ENTRYPOINT uwsgi --ini /opt/app/django_platform/uwsgi.ini && nginx -g "daemon off;"

The Dockerfile, .gitlab-ci.yml, and django_platform.conf are all stored in the root directory of the project.
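The ENTRYPOINT above also assumes a uwsgi.ini in the project root, which this article never shows. A minimal sketch that would match the nginx configuration’s socket path (the module name is a guess from the project name, and every value here is an assumption, not my actual file):

```ini
[uwsgi]
# the project root, matching the paths used throughout this article
chdir = /opt/app/django_platform
# the WSGI entry point; "django_platform.wsgi" is a guess based on the project name
module = django_platform.wsgi:application
master = true
processes = 2
# the unix socket that nginx's uwsgi_pass points at
socket = /opt/app/django_platform/uwsgi.sock
chmod-socket = 666
# remove the socket file on exit
vacuum = true
```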

Bingo, so far we have completed the automated deployment of the front and back ends!