Preface

I originally wanted to cover everything in one article, but length, typesetting, and readability forced me to split it into several parts; please bear with me. The whole series runs to more than 20,000 words (80,000 characters) and more than 60 pictures, some of which are long images stitched together from several screenshots. Navigation links to the previous and next parts appear at the beginning and end of each article.

It is recommended to read the introduction first to get a sense of the overall content and process

One-stop CI/CD: building an engineering service environment for a small front-end team

Start

In this era of the booming big front end, Vue, React, Angular, Flutter, Electron, mini programs, and other technical frameworks all contend for attention. These are the weapons we front-end developers fight with, and front-end engineering practices are gradually spreading to small companies and teams. As the lead of a small front-end team, you need to know how to build a basic front-end engineering service environment to improve the team's development experience and productivity.

The front-end engineering system mainly includes CI/CD, a private npm registry, an API mock server, and so on

Come on, boss, let's take a quick look at the engineering services we're going to build

Overview diagram

There isn't that much to it

Local development: ESLint, Stylelint, and Prettier rules with VS Code auto-formatting, plus a Git hook that checks commits (if you add the rules in the middle of a project, team morale may take a hit 😂)

When we push to a particular branch, or a branch merge lands in the repository, our CI/CD service is triggered: it automatically builds our code (you can also add unit tests, code-quality checks, and so on, which this tutorial does not cover), and on a successful build it automatically deploys to the corresponding development, test, or pre-release environment. Deploying to the production environment is usually a manual step, and a quick rollback mechanism is also needed. Finally, the CI/CD service should report its status (by email, DingTalk, Enterprise WeChat, etc.) to tell the team or the relevant developers the result of the build.

Knowledge map

CI/CD pipeline

CI stands for Continuous Integration; CD stands for Continuous Delivery and Continuous Deployment. You can think of them as processes similar to the software development life cycle.

In simple terms: change the code locally -> git push to the code hosting service -> dependencies are installed, and the code is packaged, tested, and deployed automatically (the development environment is usually fully automatic; the production environment usually requires a manual confirmation to go live)
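The automated part of that chain can be sketched as a small script. This is just an illustration, not the actual Jenkins setup built later in the series; the stage commands are printed as placeholders rather than executed, and names like static-host are assumptions:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of one CI/CD run. The stage commands are echoed
# placeholders, and static-host is a made-up server name.
set -euo pipefail

run_stage() {                    # print a stage banner, then run its command
  echo "==> stage: $1"; shift
  "$@"
}

run_stage install echo "npm ci"
run_stage build   echo "npm run build"
run_stage deploy  echo "rsync -az dist/ eric@static-host:/var/www/app/"
```

A real pipeline runs the same sequence, but with the commands executed on a dedicated build machine instead of echoed.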

Words aside, just look at the picture

CI/CD pipeline working diagram

To be honest, the test stage hasn't been built yet, hahaha, but there will be unit tests and so on for the widget library

Why the automated process is necessary: it avoids many problems and a lot of conflict resolution. Suppose your front-end team has six people: everyone's operating system, Node version, and npm version are likely to differ, and even if they were all identical, the artifacts each person builds from the same source repository could still differ. That creates several problems:

  • Time consuming: everyone has to build separately
  • Conflict prone: when Git does pull, merge, push, etc., all kinds of unnecessary conflicts arise and have to be resolved, which is time-consuming and unpleasant
  • Cache invalidation: the hashes of built static files are hard to control across machines and easily invalidate client-side caches, even with contenthash and other optimizations configured in Webpack. For example: a large file such as common.js, base.js, echarts.js, or vendor.js hasn't actually been updated and is already cached on the client, but after a build on a different machine its hash changes and the client user has to download it again
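The cache-invalidation point is easy to verify for yourself: a content digest depends only on the file's bytes, so one build machine producing one artifact keeps hashes (and client caches) stable, while a byte-for-byte different artifact gets a new hash. A minimal sketch, with sha1sum standing in for Webpack's contenthash and vendor.js as a toy file:

```shell
# sha1sum stands in for Webpack's contenthash; vendor.js is a toy file.
tmp=$(mktemp -d) && cd "$tmp"

printf 'console.log("vendor v1")' > vendor.js
h1=$(sha1sum vendor.js | cut -d' ' -f1)
h2=$(sha1sum vendor.js | cut -d' ' -f1)   # content unchanged -> same hash

printf 'console.log("vendor v2")' > vendor.js
h3=$(sha1sum vendor.js | cut -d' ' -f1)   # content changed -> new hash

[ "$h1" = "$h2" ] && echo "same content, same hash"
[ "$h1" != "$h3" ] && echo "new content, new hash"
```

A single CI machine guarantees the "same content" case; six laptops with different toolchains do not.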

To be fair, large companies with well-staffed teams generally don't hit these problems, because someone is dedicated to the build side of things. But in a small team just starting out, that kind of pain you only understand once you've lived through it 🤣. If you know how, though, these problems are easy to solve 😎

To build the automated build-and-deploy CI/CD pipeline service, we use Jenkins here

Jenkins

There are GitHub's GitHub Actions, GitLab's GitLab CI, Gitee's Gitee Go, Travis, Netlify, and so on. We use Jenkins. To quote:

Jenkins is the leader in open source CI&CD software, providing over 1,000 plug-ins to support build, deployment, automation, and any project needs.

To make deployment, migration, and so on easier, we will run Jenkins in Docker

Docker

Docker is an open-source, lightweight application container engine. In the past, when we wanted to try a Linux operating system on Windows, we would install a virtual machine, install the operating system inside it, then install applications, and if it crashed we could reset it without affecting the host system. But virtual machines are heavy and slow to start. Now we use Docker: compared with installing an operating system in a virtual machine, Docker carries applications in containers, which is lightweight, efficient, and quick to deploy

Docker consists of Image, Container and Repository.

Image

An image is like the installation media for a system (a DVD, a USB stick, etc.) and can contain Node, GitLab, and so on; it can also contain a full CentOS system, or even CentOS + Jenkins or CentOS + Verdaccio (the private npm registry).

Container

A container is a small, isolated environment, a bit like a lightweight virtual machine, in which your image runs

Repository

A repository stores images, similar to a Git repository. Docker Hub is the public repository, but access to it from here is slow, so the registry mirror is usually set to the Taobao source

Later, we will use several Docker containers to run Jenkins, Verdaccio, YAPI, and so on, each in its own container, forming a small set of standalone services. Now you more or less know why Docker is used, right?

Verdaccio

Verdaccio is a lightweight private npm proxy registry built with Node.js, well suited to small teams

YAPI

YAPI is an API management platform. With front end and back end separated, API mocking becomes especially important: without an API and mocks, teammates have to hard-code piles of test data and conditional branches. The main mock systems at the moment include RAP2, Eolinker, YAPI, and so on. We use YAPI here: it is lightweight and simple with solid features, an active community, and a high star count, and it needs MongoDB to store its data

MongoDB

MongoDB is a non-relational database written in C++. Features: high performance, easy deployment, easy use, convenient data storage. Here we install it for YAPI

Git workflow

Let's talk about the Git flow for our small team, so we can understand what CI/CD will do for us

The environment in which the code runs

In general, small companies and small teams will have at least these environments:

  • Local development environment:

Developers test against their own locally deployed static server, or a local dev server (e.g. one started with npm run serve); it can run the code of any branch

  • Dev development environment:

This environment runs the code built from the dev branch; it is unique and shared by the team

  • Test & Pre-release environment:

This environment runs the code built from the release branch; it is unique and shared

  • Online production environment:

This environment runs the code built from the master branch; it is unique and shared

Git branch model

Take a look at what each branch does

Branching strategy

Dev is a shared development branch and is never merged into other branches

All branches are checked out from the master branch

  • Master: protected branch, corresponding to the production environment
  • Release: protected branch; completed feature branches apply to be merged into release, which is then handed to the testers for testing
  • Feature-*: feature branches, for developing specific features
  • Dev: development branch & dirty branch, corresponding to the shared development environment; its code is deployed there so developers can self-test and handle routine and ad-hoc debugging
  • Hotfix-*: emergency bug-fix branches, which can be merged directly into master. (If release has already merged several feature branches, merging release into master after an emergency fix would drag along features that are still in testing or not ready to go live, so once the fix passes testing the hotfix branch is merged directly into master instead.)

The workflow is as follows:

  1. After receiving the requirements document, hold a review and assign each feature to a person or group; the relevant people check out feature branches from master
  2. Besides testing locally during development, merge into the dev branch when necessary and self-test in the shared development environment
  3. Since hotfixes may land on master while a feature is in development, it is common practice to regularly merge master into your feature branch to head off conflicts
  4. After self-testing is complete, apply to merge into release; once the merge succeeds and the build is deployed to the test environment, notify the testers
  5. After the tests pass, apply to merge release into master, ready to go live
  6. If the tests fail, fix the feature branch and merge again
  7. After a successful launch, delete the corresponding feature branch, and merge the latest master into dev
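The steps above can be dry-run against a throwaway repository. The branch name feature-login and the commit messages are illustrative, not from the article:

```shell
# Dry-run of the branch model in a temporary repo; feature-login is a
# made-up feature branch name.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git symbolic-ref HEAD refs/heads/master        # make sure the branch is named master
git config user.email eric@example.com
git config user.name eric
git commit -q --allow-empty -m "init"

git checkout -qb feature-login                 # 1. feature branch checked out from master
git commit -q --allow-empty -m "feat: login"
git checkout -q master
git checkout -qb release                       # release is also based on master
git merge -q --no-ff feature-login -m "merge into release for testing"   # step 4
git checkout -q master
git merge -q --no-ff release -m "release goes live"                      # step 5
git branch -q -d feature-login                 # step 7: delete the merged feature branch
```

In the real workflow, the two merges happen through merge requests on Gitee rather than locally, and CI/CD fires on each one.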

CI/CD work

When a local branch is merged into dev and pushed to the remote Git repository, a task is triggered that automatically builds, deploys, and publishes to the dev environment

When we apply in Gitee to merge a feature branch into release, a task is triggered that automatically builds and, on success, deploys and publishes to the release test environment

When we apply to merge the release or hotfix branch into master, a task is triggered that builds automatically; once the build succeeds, deployment to the production environment is done manually (production usually requires an extra manual step: clicking a release button, etc.).

Create a repository

Now that the environments are clear, let's create a new repository accordingly

You can also pull my prepared starter repository directly: gitee.com/eric-gm/ci-…

Installation preparation

System requirements

A Linux system; I use CentOS 8. Because Docker will be installed, the kernel version must be no lower than 3.10. I use two Aliyun servers here: one to host Jenkins, Verdaccio, YAPI, and so on as the environment-deployment machine, and another running Nginx as the static-resource server. For the environment machine, 2 cores and 4 GB of RAM is the recommended minimum; if GitLab is also installed, start at 8 GB, and if the boss can get you 16 GB, that's even better, hahaha

Whether it is an intranet server or a cloud server, you can configure an SSH connection to reach it conveniently from your local machine

Create a user

Create a user on a CentOS system; on other systems, search for the equivalent if you hit problems

  1. Taking a user named eric as an example, enter a command to add the user, create the user's home directory, and specify bash as the shell
useradd -m -s /bin/bash eric

-m automatically creates the user’s home directory and copies the files in /etc/skel to the home directory

-s Specifies the shell used by the user after login

  2. Then set a password for the user; after entering the command you will be prompted for the password twice
passwd eric
  3. View the current user list
cat /etc/passwd
  4. For ease of operation, we now grant sudo (root) privileges to eric
#Enable write permission on the sudoers file (it is read-only by default)
chmod -v u+w /etc/sudoers
#Edit sudoers
vi /etc/sudoers
-----------------------------------------------
# Allow root to run any commands anywhere
root    ALL=(ALL)       ALL
eric    ALL=(ALL)       ALL    # add this line
-----------------------------------------------
#Remember to restore the sudoers file to read-only afterwards
chmod -v u-w /etc/sudoers
  5. Switch user
#su - username: the "-" (short for -l) also switches the working directory and environment; switching to root needs no username
su - eric
#Next, prepare the SSH connection (you can also use the Aliyun console)
  6. sshd_config is the file that controls remote connections
vi /etc/ssh/sshd_config
-----------------------------------------------
#Forbid root login
PermitRootLogin no
#Only listed users may log in via SSH
AllowUsers eric
#Disable password login. ⚠️ If this is set to no while the certificate is not configured correctly, you cannot log in. Don't panic: Aliyun's VNC remote connection can get you back in, haha
PasswordAuthentication no
-----------------------------------------------

SSH configuration

1 Creating a Key

#On Windows, the .ssh directory is at <system drive>/Users/$(yourusername)/.ssh
#On macOS
cd /Users/$(yourusername)/.ssh
#Run the command to create the key pair; press Enter at the prompts to skip the key passphrase (setting one is up to you)
ssh-keygen -t rsa -f $(yoursshname) -C "[email protected]"

#If the .ssh directory does not belong to your user, take ownership of it
sudo chown -R username .ssh

Explanation of the flags:

  • -t: specifies the key type; the default is rsa, so it can be omitted.
  • -C: sets a comment, such as an email address.
  • -f: specifies the key file name; if omitted, id_rsa and id_rsa.pub are generated by default.
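If you want to dry-run the command safely first, you can do it in a throwaway directory. The -N "" flag is added here so no passphrase prompt appears (equivalent to pressing Enter twice), and the email is a placeholder:

```shell
# Non-interactive dry-run of the key creation in a temporary directory.
tmp=$(mktemp -d) && cd "$tmp"
ssh-keygen -q -t rsa -f aliyun-env -N "" -C "eric@example.com"
ls -l aliyun-env aliyun-env.pub    # the private and public halves
```

The private file stays on your machine; only the .pub half ever leaves it.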

Because we need multiple key pairs to connect to the corresponding hosts, I created several here: gitee-eric, aliyun-env, and aliyun-nginx, for connecting to the Gitee repository, the Aliyun environment machine, and the Aliyun static server respectively

This produces gitee-eric, gitee-eric.pub, aliyun-env, aliyun-env.pub, aliyun-nginx, and aliyun-nginx.pub; the .pub public keys are what you copy to the hosts you connect to

Example of a successful creation; remember to run it in your own .ssh directory, and just press Enter without typing a passphrase

➜  .ssh ssh-keygen -t rsa -f aliyun-env -C "[email protected]"
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in aliyun-env.
Your public key has been saved in aliyun-env.pub.
The key fingerprint is:
SHA256:ePnhkUiIHTy8zsdpX1F9MKtC+PJK2F768PjEYuoUFgc [email protected]
The key's randomart image is:
+---[RSA 2048]----+
|     oE       oo |
|     o++ .    .oo|
|    . +o+ .  .. .|
|      .= = ...   |
|     o+.S.* ..   |
|     .o==* +.    |
|      oo*.B.     |
|     . = X.      |
|     .o =o+      |
+----[SHA256]-----+


2 Configuring a Public Key

Copy the created public key aliyun-env.pub

 #cat the contents of the public key aliyun-env.pub and copy it
 cat aliyun-env.pub

Now log in to your environment-deployment host as the newly created user eric

#pwd should print the user's home directory
pwd 
/home/eric
#Create the .ssh directory under /home/eric and set permission 700
mkdir ~/.ssh
chmod 700 ~/.ssh
#Create the SSH authorized-keys file and set permission 644
vi ~/.ssh/authorized_keys
chmod 644 ~/.ssh/authorized_keys
-----------------------------------------------
#Paste in the public key you copied earlier
ssh-rsa askdlajsdkl...   (whatever you copied)
-----------------------------------------------
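The server-side steps above can also be scripted. The snippet below targets a throwaway directory so you can dry-run it locally; on the real host, replace $demo_home with $HOME, and note that the key line here is a fake placeholder (ssh-copy-id does the same job in one command where it is available):

```shell
# Same steps as the manual ones above, against a throwaway "home" directory.
demo_home=$(mktemp -d)
pubkey='ssh-rsa AAAAfakekeyfortesting eric@example.com'   # placeholder public key

mkdir -p "$demo_home/.ssh"
chmod 700 "$demo_home/.ssh"                                # sshd rejects looser permissions
printf '%s\n' "$pubkey" >> "$demo_home/.ssh/authorized_keys"   # append, never overwrite
chmod 644 "$demo_home/.ssh/authorized_keys"
```

Appending rather than overwriting matters once several teammates' keys live in the same authorized_keys file.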

Edit the SSH login configuration file /etc/ssh/sshd_config so that users like eric are allowed to log in remotely

sudo vi /etc/ssh/sshd_config
-----------------------------------------------
#Whether the root user may log in
PermitRootLogin yes
#Allow password login for now. ⚠️⚠️⚠️ If this is set to no while the certificate is not configured properly, you cannot log in, hahaha. Don't panic: Aliyun's VNC remote connection can get you back in, haha
PasswordAuthentication yes
RSAAuthentication yes
PubkeyAuthentication yes
-----------------------------------------------

Save the settings, exit, and restart the sshd service

sudo systemctl restart sshd

3 Connecting to the Remote Host

Now, back on the local PC, edit the SSH client configuration file ~/.ssh/config (create it if you don't have one) and add the SSH host

Host env
	HostName 47.115.11.abc
	User eric
	Port 22
	IdentityFile ~/.ssh/aliyun-env

Test the connection

ssh env
#The first connection prompts you to add the host to known_hosts; answer yes

#If you see this, you're connected
Last login: Sun Aug 2 03:12:27 2020 from 113.110.38.101
Welcome to Alibaba Cloud Elastic Compute Service!
[eric@jenkins-t ~]$

After the connection succeeds, disable root login and password login

sudo vi /etc/ssh/sshd_config
-----------------------------------------------
#Whether the root user may log in
PermitRootLogin no
#Users allowed to log in via SSH; add eric here
AllowUsers eric
#Disable password login. ⚠️⚠️⚠️ If this is set to no while the certificate is not configured properly, you cannot log in, hahaha. Don't panic: Aliyun's VNC remote connection can get you back in, haha
PasswordAuthentication no
RSAAuthentication yes
PubkeyAuthentication yes
-----------------------------------------------

If you use a Mac, you can use an Alfred SSH workflow to start a connection to the remote host; see the article "Alfred integration with SSH + iTerm2" for the configuration

Generally speaking, for machines in pre-release and production environments you have to go through a bastion host, with passwords, verification codes, and so on, before you can log in

Installation and usage process

Let's go over the process first, so we understand the general order of the route

  1. Docker: once Docker is installed, our entire set of services will be carried in individual Docker containers
  2. jenkinsci/blueocean: responsible for the CI/CD tasks: automatic packaging, building, deployment, and other automated jobs (we pull this image for the pipeline visualization; it is essentially Jenkins with the Blue Ocean plugin installed)
  3. verdaccio/verdaccio: the private npm registry
  4. mongo: the database, installed here for YAPI
  5. yapi: we have to build this image ourselves
  6. docker-compose: used for unified control: install and remove, start and stop, container network connections, and so on
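To preview where this is heading, the list above might eventually be tied together by a docker-compose.yml roughly like the one below. The image tags, ports, and service layout are assumptions to be refined in later parts, not the final configuration:

```shell
# Writes a sketch of the compose file into a temporary directory; the
# service definitions are illustrative, not the final configuration.
workdir=$(mktemp -d)
cat > "$workdir/docker-compose.yml" <<'EOF'
version: "3"
services:
  jenkins:
    image: jenkinsci/blueocean
    ports:
      - "8080:8080"
  verdaccio:
    image: verdaccio/verdaccio
    ports:
      - "4873:4873"
  mongo:
    image: mongo
EOF
echo "wrote $workdir/docker-compose.yml"
```

With such a file in place, `docker-compose up -d` starts the whole stack and `docker-compose down` stops it, which is exactly the "unified control" point 6 refers to.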

Docker

Yum setup

Let’s use yum to install Docker

  1. Back up the original repo file first, so it can be restored if something goes wrong
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
  2. Install wget, create the directory, and download the Aliyun yum source configuration
yum install wget -y
mkdir -p  /etc/yum.repos.d
#Note: match this to your CentOS version
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-8.repo
#For CentOS 7, use the 7 repo instead
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
  3. Update the yum cache
#Clear the original YUM source cache
yum clean all
#Generate aliyun YUM source cache
yum makecache
  4. Upgrade the local yum packages
yum update
  5. Install yum-utils, which provides yum-config-manager for managing yum sources
yum install -y yum-utils device-mapper-persistent-data lvm2
  6. Install the Aliyun Docker source
sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
  7. Update the yum index
yum makecache fast
yum clean all

Docker installation

  1. Install Docker
yum -y install docker-ce
#If something goes wrong, run these two commands instead
wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.13-3.1.el7.x86_64.rpm
yum install -y ./containerd.io-1.2.13-3.1.el7.x86_64.rpm
  2. View the Docker version
docker -v
  3. Add your user to the docker group, then log out and back in for it to take effect; this avoids having to sudo every docker command
sudo usermod -aG docker $(yourname)
  4. Start the Docker service
sudo service docker start
  5. Enable the Docker service to start on boot
systemctl enable docker.service

Basic Docker commands

Image commands

#Pull an image
docker pull images_name
#List local images
docker images
#Delete a local image
docker rmi images_name/image_id

Container commands

#List running containers
docker ps
#List all containers, including stopped ones
docker ps -a
#Stop / start / restart / remove a container
docker stop container_id/container_name
docker start container_id/container_name
docker restart container_id/container_name
docker rm container_id/container_name
#View container logs
docker logs [options] container_id/container_name
#Enter a running container
docker exec -it container_id/container_name [/bin/bash]
#Commit a container's changes as a new image
docker commit container_id/container_name

Volume commands

#List volumes
docker volume ls
#Delete a volume
docker volume rm volume_id/volume_name
#Delete all unused volumes
docker volume prune

Network commands (Docker Compose will take these over later)

#View the networks that exist in docker
docker network ls

#View network details
docker network inspect

#Custom network (bridge type by default)
docker network create front-net

#Add containers web1 and web2 to the network, so that containers web1 and web2 can ping each other with this name. DNS resolution will be performed automatically
docker network connect front-net web1
docker network connect front-net web2

#disconnect
docker network disconnect front-net web1
docker network disconnect front-net web2

The next article

In the next article we will start building the Jenkins CI/CD service with Docker

Click here to go to the next post

Get the complete mini-manual

Follow the official account "front-end small manual" and reply "small manual" to get the PDF version
