CI/CD is short for Continuous Integration / Continuous Deployment. The CD can also stand for Continuous Delivery, but what software engineers deal with most directly is continuous deployment.
When I first started working, my first exposure to the idea of CI was the team's QA (quality assurance) using Hudson to run quality scans on our projects and some basic automated tests. The most memorable moment was QA telling me, "Your code is throwing static-analysis warnings, go fix it…".
Thinking back on it now, I wondered, "Huh? Weren't we using ESLint? I don't remember…" So I checked ESLint's release history and realized that at the time ESLint was only on major version 3, and the VS Code ESLint plugin was a fairly early release, probably not yet widely adopted.
Later I gradually heard about Jenkins, Travis CI, and so on, but as tempting as they sounded, I never actually got to use any of them.
Honestly, I wasn't interested in CI/CD either. Why? Because I had no real motivation to use it yet.
About building and deploying
The simplest way to build and deploy is to bundle the project with a tool like webpack or gulp, then put the output into a web container on the server that hosts static assets. For Java that might be Tomcat; on the front end, Nginx is the popular choice. Once the web container is up, the bundled files (index.html, main.js, and so on) become accessible, as you all know.
In '16 to '18 I wasn't the one in charge of packaging and deployment (partly because the front end had no access to the servers, emmm…), so I paid no attention to it.
In '18 and '19, packaging and deployment became my job. I had no experience in this area back then, and every Linux command meant searching Baidu as I typed. But I clearly remember seeing Xshell and XFTP in the test team's office. Once I had those two tools set up, deployment turned out to be genuinely easy: run a script, wait quietly for the webpack or gulp workflow to finish, then send the files to the server with XFTP. The only thing to watch was not making mistakes (obviously, human error is always a concern). Since builds and deployments were infrequent and there weren't many projects, I got through that year just fine.
By last year, though, I was handling around five projects, close to ten front-end codebases in total. At that daily deployment pace, even Xshell + XFTP couldn't save me. Not every project releases daily, but the test environment ships constantly, and the daily deployments wore me down: writing code kept getting interrupted, and the wasted time added up.
I was looking for a change, but I still didn't reach for CI/CD, because I felt I didn't really understand it yet. So I decided to start by doing the build/deploy work with shell scripts, which produced these two exploratory articles (a sketch of that kind of script follows the list):
- One small step for automated deployment, one big step for the front end
- In-depth practices for front-end automation deployment
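For reference, the kind of script those posts worked toward looks roughly like this. It is a minimal sketch, assuming an npm build script that outputs to dist/ and a target path of /usr/share/nginx/html; the script name, user, host, and paths are illustrative, not the exact ones from the articles.

```bash
#!/usr/bin/env bash
# deploy.sh: a minimal semi-automated build-and-upload sketch (illustrative).
set -e                                            # stop at the first failed step

npm run build                                     # bundle with webpack/gulp into dist/
scp -r dist/* user@host:/usr/share/nginx/html     # push the output to the web container
echo "Deployed at $(date)"
```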
With that wave of scripting, I had basically moved to semi-automation, and the anxiety eased. But my computer still couldn't keep up, and the sound of the fan spinning at full speed made me want to shut the thing down… You know how hot it gets running a local dev environment alongside one or two build/deploy scripts, plus everything else on the machine.
So the build/deploy work shouldn't run on my computer at all; it's too much for it.
And I don't want to trigger the deployment script by hand either, that's tiring too. It was time to let the code learn to deploy itself. That's when CI/CD finally started to appeal to me.
Since our code is hosted on a self-hosted GitLab server, I went straight for GitLab's built-in CI/CD. After work, I spent almost two days getting familiar with the GitLab CI/CD documentation.
Then I set up the environment following the docs and debugged the .gitlab-ci.yml configuration file over and over. I remember failing about 11 times before a Pipeline finally ran successfully for the first time. It was a painful process; sometimes you simply can't see where the configuration doesn't match up.
But once you get the flow right, all that trial and error is worth it. Nice!
What did CI/CD do?
As I mentioned earlier, a release mainly breaks down into the following steps:
- Code merge: the test and production environments each have their own branch; once all the code for a release has been merged into the corresponding branch, you can consider releasing.
- Packaging, or building: for a production deployment, once we've switched to the production branch and pulled the latest code, the packaging step can begin. This is mostly handled by a bundler such as webpack. The packaging command is usually defined under scripts in package.json; the one I define here is build:prod, so running npm run build:prod does the job.
- Deployment: putting the packaged files into the web container. The container usually lives on a Linux server, so the files have to be transferred remotely, typically with a shell script or XFTP.
What CI/CD does is take over the process with automation.
Watching for code changes
My requirement: when code is merged into a branch, GitLab should automatically run the packaging and deployment steps for me.
So the ability to watch for code changes comes first. GitLab has it, and if you've ever looked into git hooks, you know it's entirely doable.
Beyond that, most code hosting platforms provide webhooks that can listen for many events, such as push and merge.
Which is to say, developers could implement their own CI/CD mechanism without using the CI/CD capabilities the hosting platform provides.
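As a rough illustration of that git-hooks route (not what I ended up doing; the branch name, script path, and log path below are assumptions), a server-side post-receive hook can inspect the pushed ref and kick off a deploy script:

```bash
#!/usr/bin/env bash
# hooks/post-receive on the git server: a minimal "roll your own CI" sketch.
# post-receive reads "oldrev newrev refname" lines from stdin for each pushed ref.
while read oldrev newrev ref; do
  if [ "$ref" = "refs/heads/master" ]; then
    echo "master updated, triggering build/deploy..."
    /opt/scripts/deploy.sh >> /var/log/auto-deploy.log 2>&1   # illustrative paths
  fi
done
```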
Ps: of course, beyond CI/CD you can also wire up SMS or email notifications and more; if you're willing to experiment, a lot can be built on the open capabilities of these platforms. But there's no need to research and build CI/CD ourselves: the wheels other people have built have already spun around six times, so we might as well use them directly.
Back to the topic: whenever a code change is detected, the server should automatically run the build/deploy script for me.
How does GitLab CI/CD work?
Software serves life, and it also borrows from life. GitLab CI/CD introduces quite a few concepts, and the two I find most interesting are Pipeline and Runner.
Pipeline
Every CI/CD task that matches a trigger rule in .gitlab-ci.yml produces a Pipeline. The concept is a bit like an assembly line in a factory workshop: a workshop has many lines, different lines may handle the same type of production task or different types, and when a line sits idle it can be scheduled for other work. GitLab's Pipeline has no notion of being idle (a Pipeline is not reused after it finishes; its resources are simply released to other pipelines), but otherwise the analogy to a workshop line holds up.
Runner
Once you have assembly lines, you need workers to do the actual production. In a GitLab Pipeline, the Runner plays the role of the worker, carrying out operations according to the instructions we issue.
Types of Runner
In GitLab there are several types of runner, including the Shared Runner, the Group Runner, and the Specific Runner.
- A Shared Runner is like a floating worker who may lend a hand on any assembly line in the factory whenever support is needed. A Shared Runner can serve projects across the entire GitLab instance.
- A Group Runner is easy to understand: it works only within its own group and never for another one. In GitLab we can set up different groups, say one for the front end and one for the back end, or even N front-end groups; a Group Runner serves only the specified group.
- A Specific Runner works only on the specified project and nowhere else.
Registering a Runner
Workers need a work permit before they can start; likewise, a Runner has to go through a registration process, the factory equivalent of getting signed in. See Registering Runners in the documentation for details. Only registered Runners are eligible to execute pipelines. Too bad GitLab doesn't pay the Runner a salary!
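For reference, registration is done with the gitlab-runner command line; a minimal sketch, where the URL, token, description, and executor are placeholders you would take from your own GitLab instance's Runner settings:

```bash
# Register a runner non-interactively with the shell executor
# (all values below are placeholders, not real credentials).
sudo gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.example.com/" \
  --registration-token "YOUR_REGISTRATION_TOKEN" \
  --executor "shell" \
  --description "front-end build runner"
```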
.gitlab-ci.yml configuration
With the assembly lines and the workers in place, the workshop still needs its rules and regulations. There has to be a rule for how a line operates, don't you think?
That's right: the .gitlab-ci.yml file is where the rules are written! The CI/CD process I need is not complicated, just the two steps of building and deploying. Take a simplified production build-and-deploy flow as an example:
```yaml
workflow:
  rules:
    - if: '$CI_COMMIT_REF_NAME == "master"'   # only run the pipeline on master

stages:
  - build
  - deploy

build_prod:
  stage: build
  cache:
    key: build_prod
    paths:
      - node_modules/
  script:
    - yarn install
    - yarn build:prod
  artifacts:
    paths:
      - dist

deploy_prod:
  stage: deploy
  script:
    # ship only the build output, not the whole project directory
    - scp -r $CI_PROJECT_DIR/dist/* username@host:/usr/share/nginx/html
```
First, I only want the build and deploy jobs to run on the master branch, which is handled by the if condition under workflow:rules.
Then I want the whole process to run in two stages: the first, build, performs the build task; the second, deploy, performs the deployment. This is defined with stages.
Next, I define two jobs. The first, build_prod, belongs to the build stage; the second, deploy_prod, belongs to the deploy stage.
The build_prod job runs the yarn install and yarn build:prod scripts, and the generated files are saved for later jobs via the artifacts configuration.
The deploy_prod job transfers the files to the Nginx directory on the Linux server with the scp command.
This simple Pipeline configuration example uses the basic pipeline architecture, just with a single job defined per stage.
GitLab CI/CD Variables
Through variables, GitLab offers extra configuration power for CI/CD, letting us quickly grab key information and use it to make decisions in the process. $CI_COMMIT_REF_NAME and $CI_PROJECT_DIR in the example above are GitLab predefined variables.
Besides the predefined variables, we can also define environment variables of our own, such as the server IP and user name. This avoids the risk of listing private information in plain text in the configuration file, and it also makes later adjustments quick, without editing the file directly.
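As a sketch of how that might look (the variable names DEPLOY_USER and DEPLOY_HOST here are assumptions; they would be defined under the project's Settings > CI/CD > Variables in the GitLab UI rather than in the file):

```yaml
# deploy_prod rewritten to read the server details from CI/CD variables
# (DEPLOY_USER and DEPLOY_HOST are illustrative names set in GitLab's settings).
deploy_prod:
  stage: deploy
  script:
    - scp -r $CI_PROJECT_DIR/dist/* $DEPLOY_USER@$DEPLOY_HOST:/usr/share/nginx/html
```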
Trust between hosts
To transfer files between hosts with scp, a trust relationship has to be established first. For CI/CD it's best to go password-less; the basic idea is to hand your SSH public key over to the other machine. I covered this in "One small step for automated deployment, one big step for the front end", so I won't repeat it here.
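The gist, as a minimal sketch (run on the machine the Runner lives on; the user and host are placeholders):

```bash
# Generate a key pair for the runner's user (accept the prompts), then install
# the public key on the target server so scp/ssh no longer ask for a password.
ssh-keygen -t rsa -b 4096
ssh-copy-id username@host
```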
Deploying the Runner independently
Because I had initially deployed the Runner straight onto the GitLab code server, and that server is not particularly powerful, the CPU-heavy build and deploy pipelines were a real strain on it. Sometimes a running Pipeline would even bring down the GitLab web service.
My teammates would ask me: "Why is GitLab just a white screen and won't open?"
Before long, my leader gave me a Linux server dedicated to the front end's daily work. Bingo: I took the chance to deploy the Runner on the new machine on its own, so it no longer affected my teammates, and each release went from about 8 minutes to under 2 minutes. Really nice!
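For completeness, setting the Runner up as a service on the new machine looks roughly like this (a sketch that assumes the gitlab-runner binary is already installed and a gitlab-runner user exists; it would then be registered as shown earlier):

```bash
# Install and start gitlab-runner as a system service on the build machine
# (user and working directory are conventional defaults, adjust as needed).
sudo gitlab-runner install --user=gitlab-runner --working-directory=/home/gitlab-runner
sudo gitlab-runner start
```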
The benefits of CI/CD
Most obviously, the bulk of my repetitive work is gone, and the time freed up goes into more meaningful things. Doesn't that smell great? And not having to release by hand every day does wonders for my mood!
Besides, because CI/CD is automated, there are few mistakes once the scripts are written correctly, which greatly reduces the chance of production accidents.
Summary
Starting from my own experience, this article recalled the pain points I ran into while building and deploying, and walked through how I used CI/CD to solve them around a basic GitLab CI/CD example. Although the protagonist here is GitLab CI/CD, the thinking carries over to the CI/CD offerings of other code hosting platforms, so mastering one makes the others easy by analogy. And with tools like Pipeline we can do even more, such as continuous integration plus automated testing. That part is up to your imagination; I'll leave the rest to the clever reader.
If you found this article useful, feel free to give it a like and follow me (Front-end Si Nan); I sincerely appreciate the support. You're also welcome to reach out to me directly. I'm Tusi, and I look forward to making progress together with you!