Preface
Following on from the previous article, where we implemented CLI project scaffolding to handle project initialization, this article focuses on continuous integration (CI/CD) after development is complete.
The main goal: when code is pushed to the GitHub repository, it is automatically built and deployed to the corresponding server.
You could actually do all of this with Jenkins alone.
So why Docker? To show off.
Haha, just kidding. The Jenkins website itself recommends this approach; the main aim is to improve the speed and consistency of automated tasks.
Docker installation
I am using CentOS on Alibaba Cloud.
Before installing Docker Engine-Community, install the required dependencies:
# Install dependencies
$ sudo yum install -y yum-utils \
device-mapper-persistent-data \
lvm2
# Set up the stable repository
$ sudo yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
# Install Docker
sudo yum install docker-ce docker-ce-cli containerd.io
Under normal circumstances, the few lines above will install Docker. At this point, you can run:
docker --version
to verify that Docker was installed successfully. (If the daemon is not running yet, start it with sudo systemctl start docker.)
For installing Docker on other operating systems, refer to the corresponding beginner tutorial.
Before moving on, a quick primer on Docker:
What does Docker do
Docker can be seen as a lightweight virtual machine.
Instead of emulating a complete operating system, a Docker container isolates processes. In other words, it wraps a protective layer around an ordinary process: to the process inside the container, the various resources it touches are virtual, which achieves isolation from the underlying system.
Advantages over virtual machines:
- Fast startup
- Low resource usage
- Small footprint
Docker composition
A complete Docker setup consists of:
1. Docker Client
2. Docker Daemon
3. Docker Image
4. Docker Container
Docker Client
Docker uses a client-server (C/S) architecture: containers are managed and created through a remote API, and the command line we use is actually the Docker client. Through the shell, we tell the server to operate on Docker containers.
Docker Daemon
The Docker Daemon serves as a server that receives requests from clients and processes them (creating, running, and distributing containers).
Docker Image
Docker packages an application and its dependencies into an image file; only from this file can a Docker container be generated. The image file can be thought of as a template for the container: Docker generates container instances from it, and the same image file can produce multiple container instances.
Docker Container
A container is the runtime environment generated from an image file. It can be used to:
- run services
- test software
- do continuous integration
- ...
Jenkins installation
As noted above, creating a Docker container depends on an image file. The Jenkins website recommends using their customized jenkinsci/blueocean image.
Specific operations:
docker run \
--rm \
-u root \
-p 8080:8080 \
-v jenkins-data:/var/jenkins_home \
-v /var/run/docker.sock:/var/run/docker.sock \
-v "$HOME":/home \
jenkinsci/blueocean
Let's parse the command above:
- --rm: automatically remove the container when it exits
- -u root: run as the root user
- -p 8080:8080: map port 8080 in the container to port 8080 on the host
- -v jenkins-data:/var/jenkins_home: map the /var/jenkins_home directory in the container to a Docker volume named jenkins-data; if the volume does not exist, the docker run command creates it automatically
- -v /var/run/docker.sock:/var/run/docker.sock: let the Jenkins container talk to the host's Docker daemon
- -v "$HOME":/home: map the host's home directory to /home in the container
- jenkinsci/blueocean: the image to create the container from; if it is not present locally, Docker downloads it
Not surprisingly, you can now visit port 8080 on your server (if port 8080 is likely to be occupied, change the mapping to a less commonly used port) and see the following page:
The password above is printed in the log output of the previous command, between the rows of asterisks; just copy it from there.
You can install the plug-ins as recommended. With that, the Jenkins installation is complete.
Configure the front-end project packaging deployment environment
Install the plug-ins:
- Publish Over SSH: used to connect to the remote server
- Deploy to container: used to publish the packaged application to the remote server
- NodeJS Plugin: self-explanatory
Manage Jenkins -> Configure System
Configure SSH for the remote server:
- Passphrase: the server password (or the key's passphrase)
- Path to key: the path of the key file used to connect to the remote server
- Key: the contents of the key file
- Name: a custom name for the server
- Hostname: the server's public IP address
- Username: the username on the server
- Remote Directory: the directory files are transferred to
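The Key and Path to key fields assume you have a key pair for Jenkins to use. A minimal sketch of generating one with ssh-keygen (the file path and comment are examples, not from the article):

```shell
# Generate a dedicated key pair for the Publish Over SSH plugin.
rm -f /tmp/jenkins_deploy_key /tmp/jenkins_deploy_key.pub
ssh-keygen -t rsa -b 4096 -f /tmp/jenkins_deploy_key -N '' -C 'jenkins-deploy' -q
# The private key goes into the plugin's Key field (or point Path to key at it):
cat /tmp/jenkins_deploy_key
# The public key is appended to ~/.ssh/authorized_keys on the remote server:
cat /tmp/jenkins_deploy_key.pub
```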
Manage Jenkins -> Global tool configuration
Configure the Node environment:
New project
With the above system configuration completed, we can begin our project configuration.
New Item
->
Freestyle project
->
Choose Git for source management
->
Build Triggers (scheduled builds; Poll SCM: set a schedule and poll the GitHub repository for code changes to trigger build and deployment)
The schedule above configures the polling interval: check the GitHub repository every five minutes for changes and, if there are any, execute the shell commands below to complete the build and deployment.
->
Build environment (Node)
->
Build: Execute shell
cd /var/jenkins_home/workspace/mason-test # enter the project directory in the Jenkins workspace
node -v # check the Node version (optional)
npm -v # check the npm version (optional)
cnpm install # install the project dependencies
npm run build # build
tar -zcvf build.tar.gz build/ # compress for easier transfer; my build output directory is named build
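The Poll SCM schedule mentioned in Build Triggers uses Jenkins' cron-like syntax. A sketch matching the five-minute interval described above (the exact value is an assumption; the original screenshot is not reproduced here):

```
# Fields: MINUTE HOUR DAY-OF-MONTH MONTH DAY-OF-WEEK
# H/5 hashes the offset per job and polls roughly every five minutes
H/5 * * * *
```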
->
Post-build operations:
- Remove prefix: the prefix to strip from the transferred files
- Remote directory: the remote directory to publish to
- Exec command: the command executed after the transfer:
cd /root/dreamONE # enter the deployment directory on the remote server
tar -zxvf build.tar.gz # unpack the archive
rm -rf build.tar.gz # remove the archive
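The pack-on-build / unpack-on-deploy pair above can be sanity-checked locally. A minimal sketch using a scratch directory (the paths and file contents are examples):

```shell
# Simulate the build side: create a fake build/ directory and pack it
rm -rf /tmp/ci-demo && mkdir -p /tmp/ci-demo/build
echo 'hello' > /tmp/ci-demo/build/index.html
cd /tmp/ci-demo
tar -zcvf build.tar.gz build/ # what the Jenkins build step does
# Simulate the deploy side: unpack and remove the archive
rm -rf build                  # pretend we are now on the remote server
tar -zxvf build.tar.gz
rm -rf build.tar.gz
cat build/index.html          # the deployed file is intact
```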
At this point, a basic automated build and deployment skeleton is in place; what follows are detail tasks such as refining the project, adding logging, and so on.
Afterword
With the project scaffold MasonEast-CLI,
we have completed continuous integration (CI/CD) with Docker + Jenkins.
The next step is to add a front-end performance monitoring platform that measures:
- first-screen rendering
- page stability
- API calls
to establish basic performance monitoring and inform our optimization strategies.