
This is our 108th original article. For more original articles, search for and follow our official account. First published on the Zhengcaiyun front-end blog: How to Build a Build-and-Deployment Platform Suited to Your Team.

Common build-and-deploy solutions in the front-end industry include Jenkins, Docker, and GitHub Actions, and our company already has the first two. Since we already have stable ways to build and deploy, why build our own platform for the front end? Certainly not just for fun; let me walk through the reasons.

Front-end builds run into various problems in practice, such as:

  • ESLint validation gets skipped. Front-end projects within a company vary in style across time and teams, and validation rules are not always the same. Projects can set up their own ESLint, Stylelint, and other validation intercepts, but that doesn't stop developers from skipping these code checks.
  • NPM version upgrades break compatibility. Dependency versions need compatibility checks. If an NPM package is suddenly upgraded to an incompatible version, the code errors out once it goes live, typically with various IE compatibility issues.
  • We cannot freely add the features we want. We would like to optimize the front-end build process, or add features convenient for the front end, but because builds depend on the ops platform, adding our own features means waiting for someone else's schedule.

None of these would be a problem if we had our own build platform, and so Yunchang was born.

Why the name "Yunchang"? We hope the platform, like Guan Yunchang, can hold the pass alone against ten thousand. So what capabilities does Yunchang provide?

Yunchang's capabilities

Build and deploy

While this is certainly a basic requirement, Yunchang can build the company's different types of front-end projects, such as Pampas, React, Vue, and uni-app. The overall process is not complicated. When a build starts, the Yunchang server receives the project name, branch, target environment, and other information, then updates the project code, installs dependencies, and packages the code. The build output is packaged into an image, which is uploaded to the image repository, while the project's static resources are uploaded to a CDN for the front end to use. Finally, the K8s image deployment service is called to deploy the image to the target environment, completing one online build-and-deploy cycle.

Pluggable build process

If you use someone else's build platform, many of the scripts the front end wants to add depend on someone else's services. With Yunchang, we can expose open interfaces so the front end can plug in its own custom services.

For example, the online build-and-package process can address some of the pain points mentioned earlier, such as:

  • ESLint, TSLint, and other code compliance checks, which can no longer be skipped.
  • NPM package version detection before the project is built, preventing compatibility errors after the code goes online.
  • Global front-end resource injection during packaging, such as analytics, error monitoring, and message push.
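As an illustration of such a pre-build plugin, here is a minimal sketch of an NPM version check; the package names and version ceilings are made up for the example, and a real check would read them from a team-maintained config:

```typescript
// Hypothetical blocklist: package names mapped to the highest major
// version the team has vetted (names and numbers are illustrative).
const maxCompatibleMajor: Record<string, number> = {
  'core-js': 3,
  'babel-polyfill': 6
};

interface CheckResult {
  name: string;
  version: string;
  reason: string;
}

// Scan a package.json "dependencies" map and report entries whose
// major version exceeds the vetted ceiling.
function checkDependencies(deps: Record<string, string>): CheckResult[] {
  const problems: CheckResult[] = [];
  for (const [name, range] of Object.entries(deps)) {
    const ceiling = maxCompatibleMajor[name];
    if (ceiling === undefined) continue;
    // Strip range prefixes like ^ or ~ before reading the major version
    const major = parseInt(range.replace(/^[\^~>=<\s]+/, ''), 10);
    if (!Number.isNaN(major) && major > ceiling) {
      problems.push({
        name,
        version: range,
        reason: `major ${major} exceeds vetted ceiling ${ceiling}`
      });
    }
  }
  return problems;
}
```

A check like this would run in the `check` step of the pipeline and fail the build when the returned list is non-empty.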

Review and Release process

On the company's existing platform, release control is based on lists maintained by ops: each project has a list of people allowed to release, so releases basically require the designated releaser to be on hand for each release night. To solve this, Yunchang introduces the concept of an approval flow.

When a project has finished testing in the pre-release environment, the developer can submit a production release application. The approvers then receive an approval form through DingTalk and can approve or reject it directly from the web side or from the DingTalk message. Once the application is approved, the developer can deploy the project to production whenever ready. After the production release, a Merge Request is created for the project to make subsequent code archiving easier.

The benefit is twofold: on one hand, the front end controls build-and-release permissions, so release rights are consolidated; on the other, the former release owners are freed up, letting developers put code online more easily and opening up project releases.

Capacity output

Yunchang also exports some of its build-and-update capabilities, so third-party plugins can hook into the build process. We built a VSCode plugin for developers that lets you update code freely during development, saving the trip to a web page for builds: builds and code updates happen right in the editor, and common environments get a one-click update shortcut. With those intermediate steps gone, isn't it nicer to spend that time happily writing a couple more lines of code?

Our VSCode plugin provides more than Yunchang's build capabilities: Mini Program builds, route lookup, and more. If you are interested in this plugin, look forward to our future articles.

Yunchang's architecture

As mentioned above, Yunchang's build process relies on the image deployment capability provided by K8s. Both the Yunchang client and server run in Docker, so the design adopts Docker-in-Docker: a Docker image is built by a service that itself runs in Docker.

For building code, the Yunchang server introduces a process pool. Each project being built in Yunchang is a separate instance in the pool with its own packaging process, and the packaging progress is followed up by scheduled tasks that query Redis. This also gives Yunchang a multi-instance parallel build architecture.

Communication between the Yunchang client and server uses ordinary HTTP requests and WebSocket. After the client initiates a request, the server stores data such as application, user, and build information in MySQL.

As for external resources, static resources and packaged images are uploaded to the CDN and the image repository during the build, and finally the K8s deployment interface is called to deploy the project.

Front-end builds from 0 to 1

Having seen Yunchang's features and architecture, I believe many of you want to build a similar front-end build-and-release platform. So let's walk through the design ideas behind a build platform's main modules.

The build process

The core of a front-end build platform is certainly building and packaging, and the build-and-deploy process can be broken down into the following steps:

  • After each build starts, some information about the build needs to be saved, so a release record is created. It stores the release information: the project name, branch, commitId, commit message, operator, and the environment to be updated. Besides the release record, if you need data on the project and the operator, you will also need application and user tables to store that data for association.
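As a sketch, the release record above could be modeled like this (field names and the environment values are illustrative, not our actual schema):

```typescript
// Illustrative shape of a release record; real column names and the
// id generation would come from the MySQL data layer.
interface ReleaseRecord {
  id: number;
  appId: number;        // foreign key into the application table
  userId: number;       // foreign key into the user table (the operator)
  branch: string;
  commitId: string;
  commitMessage: string;
  env: 'daily' | 'pre' | 'prod';   // environment to be updated
  status: 'pending' | 'building' | 'success' | 'failed';
  createdAt: Date;
}

// Create an in-memory record at build start; persisting it is left
// to the data layer.
function createReleaseRecord(
  input: Omit<ReleaseRecord, 'id' | 'status' | 'createdAt'>
): ReleaseRecord {
  return {
    ...input,
    id: Date.now(),       // placeholder; a real id comes from the DB
    status: 'pending',
    createdAt: new Date()
  };
}
```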
  • After the release record is created, the front-end build begins. The build can be organized as a pipeline; see the following example.
  // Build process
  async run() {
    const app = this.app;
    const processData = {};
    const pipeline = [{
      handler: context => app.fetchUpdate(context), // git: update the code
      name: 'codeUpdate',
      progress: 10 // the build progress reported after this step
    }, {
      handler: context => app.installDependency(context), // npm install: install dependencies
      name: 'dependency',
      progress: 30
    }, {
      handler: context => app.check(context), // pre-build validation (optional): code checks, ESLint, package.json versions, etc.
      name: 'check',
      progress: 40
    }, {
      handler: context => app.pack(context), // npm run build packaging logic; other project types, such as gulp, can also be handled in this step
      name: 'pack',
      progress: 70
    }, {
      handler: context => app.injectScript(context), // post-build step (optional): resource injection after packaging
      name: 'injectRes',
      progress: 80
    }, { // docker image build
      handler: context => app.buildImage(context), // generate the Docker image, upload it to the repository, then call K8s to deploy
      name: 'buildImage',
      progress: 90
    }];
    // Run each step of the build process in order
    for (let i = 0; i < pipeline.length; i++) {
      const task = pipeline[i];
      const [ err, response ] = await to(this.execProcess({ ...task, step: i }));
      if (response) {
        processData[task.name] = response;
      }
    }
    return Promise.resolve(processData);
  }
  // Run the handler of one build step
  async execProcess(task) {
    this.step(task.name, { status: 'start' });
    const result = await task.handler(this.buildContext);
    this.progress(task.progress);
    this.step(task.name, { status: 'end', taskMeta: result });
    return result;
  }
  • As for how the server actually runs scripts in each build step, the idea is to execute shell commands through Node's child_process module. Here is some example code:
import { spawn } from 'child_process';

// git clone
execCmd(`git clone ${url} ${dir}`, {
  cwd: this.root,
  verbose: this.verbose
});

// npm run build
const cmd = ['npm run build', cmdOption].filter(Boolean).join(' ');
execCmd(cmd, options);

// Run a shell command
function execCmd(cmd: string, options: any = {}): Promise<any> {
  const [ shell, ...args ] = cmd.split(' ').filter(Boolean);
  const { verbose, ...others } = options;
  return new Promise((resolve, reject) => {
    let child: any = spawn(shell, args, others);
    let stdout = '';
    let stderr = '';
    child.stdout && child.stdout.on('data', (buf: Buffer) => {
      stdout = `${stdout}${buf}`;
      if (verbose) {
        logger.info(`${buf}`);
      }
    });
    child.stderr && child.stderr.on('data', (buf: Buffer) => {
      stderr = `${stderr}${buf}`;
      if (verbose) {
        logger.error(`${buf}`);
      }
    });
    child.on('exit', (code: number) => {
      if (code !== 0) {
        const reason = stderr || 'some unknown error';
        reject(`exited with code ${code} due to ${reason}`);
      } else {
        resolve({ stdout, stderr });
      }
      child.kill();
      child = null;
    });
    child.on('error', err => {
      reject(err.message);
      child.kill();
      child = null;
    });
  });
}
  • If, for example, we want to add ESLint validation before the build, that can also be added to the build process, so the online build can intercept and control the quality of the code going live.
import { CLIEngine } from 'eslint';
import * as path from 'path';

export function lintOnFiles(context) {
  const { root } = context;
  const [ err ] = createPluginSymLink(root);
  if (err) {
    return [ err ];
  }
  const linter = new CLIEngine({
    envs: [ 'browser' ],
    useEslintrc: true,
    cwd: root,
    configFile: path.join(__dirname, 'LintConfig.js'),
    ignorePattern: [ '**/router-config.js' ]
  });
  let report = linter.executeOnFiles([ 'src' ]);
  const errorReport = CLIEngine.getErrorResults(report.results);
  const errorList = errorReport.map(item => {
    const file = path.relative(root, item.filePath);
    return {
      file,
      errorCount: item.errorCount,
      warningCount: item.warningCount,
      messages: item.messages
    };
  });
  const result = {
    errorList,
    errorCount: report.errorCount,
    warningCount: report.warningCount
  };
  return [ null, result ];
}
  • After the build is deployed, the status on the release record can be updated according to the build result. Once the Docker image is uploaded to the image repository, the image information needs to be recorded so that a previously built image can be redeployed or rolled back later, so an image table is needed. Here is some example code for generating a Docker image.
import Docker = require('dockerode');
// Make sure the server has a base Dockerfile for the image
const docker = new Docker({ socketPath: '/var/run/docker.sock' });
const image = 'image package name';
let buildStream;
[ err, buildStream ] = await to(
  docker.buildImage({
    context: outputDir
  }, { t: image })
);
let pushStream;
// authconfig carries the credentials for the image repository
const authconfig = {
  serveraddress: 'image repository address'
};
// Push the image to the remote private repository
const dockerImage = docker.getImage(image);
[ err, pushStream ] = await to(dockerImage.push({
  authconfig,
  tag
}));
// Print progress at most once every 3s
const progressLog = _.throttle((msg) => logger.info(msg), 3000);
const pushPromise = new Promise((resolve, reject) => {
  docker.modem.followProgress(pushStream, (err, res) => {
    err ? reject(err) : resolve(res);
  }, e => {
    if (e.error) {
      reject(e.error);
    } else {
      const { id, status, progressDetail } = e;
      if (progressDetail && !_.isEmpty(progressDetail)) {
        const { current, total } = progressDetail;
        const percent = Math.floor(current / total * 100);
        progressLog(`${id} : pushing progress ${percent}% `);
        if (percent === 100) { // progress complete
          progressLog.flush();
        }
      } else if (id && status) {
        logger.info(`${id} : ${status}`);
      }
    }
  });
});
await to(pushPromise);
  • Each build needs to save information such as the build progress and logs; a log table can be added to store them.

Running of multiple build instances

At this point one project's build process runs smoothly, but a build platform cannot build and update only one project at a time, so we introduce a process pool that lets the platform build multiple projects at once.

Node is a single-threaded model. When multiple independent, time-consuming tasks need to run, they can only be distributed to child processes via child_process to improve throughput. So we implement a process pool to control multiple build processes: when a child process finishes its task, a new task is picked up from the pool's task queue, which controls how many build processes run concurrently. The implementation follows.

ProcessPool.ts: the ProcessPool code is as follows.

import * as child_process from 'child_process';
import { cpus } from 'os';
import { EventEmitter } from 'events';
import TaskQueue from './TaskQueue';
import TaskMap from './TaskMap';
import { to } from '../util/tool';

export default class ProcessPool extends EventEmitter {
  private jobQueue: TaskQueue;
  private depth: number;
  private processorFile: string;
  private workerPath: string;
  private runningJobMap: TaskMap;
  private idlePool: Array<number>;
  private workPool: Map<any, any>;

  constructor(options: any = {}) {
    super();
    this.jobQueue = new TaskQueue('fap_pack_task_queue');
    this.runningJobMap = new TaskMap('fap_running_pack_task');
    this.depth = options.depth || cpus().length; // maximum number of worker processes
    this.workerPath = options.workerPath;
    this.idlePool = []; // pid array of idle worker processes
    this.workPool = new Map(); // worker process pool
    this.init();
  }

  /**
   * @func init Initializes the process pool
   */
  init() {
    while (this.workPool.size < this.depth) {
      this.forkProcess();
    }
  }

  /**
   * @func forkProcess Forks a child process and creates a task instance
   */
  forkProcess() {
    let worker: any = child_process.fork(this.workerPath);
    const pid = worker.pid;
    this.workPool.set(pid, worker);
    this.idlePool.push(pid); // a newly forked worker starts out idle
    worker.on('message', async (data) => {
      const { cmd } = data;
      this.emit('message', data); // forward worker messages (log/finish/fail) to pool consumers
      if (cmd === 'finish' || cmd === 'fail') {
        this.killProcess(pid); // clean up the finished task's process
      }
    });
    worker.on('exit', () => {
      // After exit, remove the instance from the pool and start the next task
      this.workPool.delete(pid);
      worker = null;
      this.forkProcess();
      this.startNextJob();
    });
    return worker;
  }

  // According to the task queue, take the next pending task and start it
  async startNextJob() {
    this.run();
  }

  /**
   * @func process Registers the builder file that each worker will run
   * @param processorFile The builder file path
   */
  process(processorFile: string) {
    this.processorFile = processorFile;
  }

  /**
   * @func add Adds a build task
   * @param task The build task to run
   */
  async add(task) {
    const inJobQueue = await this.jobQueue.isInQueue(task.appId); // already queued
    const isRunningTask = await this.runningJobMap.has(task.appId); // already running
    const existed = inJobQueue || isRunningTask;
    if (!existed) {
      const len = await this.jobQueue.enqueue(task, task.appId);
      // Try to run the task
      const [ err ] = await to(this.run());
      if (err) {
        return Promise.reject(err);
      }
    } else {
      return Promise.reject(new Error('DuplicateTask'));
    }
  }

  /**
   * @func initChild Initializes the build task
   * @param child The child process reference
   * @param processFile The builder file to run
   */
  initChild(child, processFile) {
    return new Promise(resolve => {
      child.send({ cmd: 'init', value: processFile }, resolve);
    });
  }

  /**
   * @func startChild Starts the build task
   * @param child The child process reference
   * @param task The build task
   */
  startChild(child, task) {
    child.send({ cmd: 'start', task });
  }

  /**
   * @func run Starts running a queued task
   */
  async run() {
    const jobQueue = this.jobQueue;
    const isEmpty = await jobQueue.isEmpty();
    // There is an idle worker and the task queue is not empty
    if (this.idlePool.length > 0 && !isEmpty) {
      // Get an idle build child process
      const taskProcess = this.getFreeProcess();
      await this.initChild(taskProcess, this.processorFile);
      const task = await jobQueue.dequeue();
      if (task) {
        await this.runningJobMap.set(task.appId, task);
        this.startChild(taskProcess, task);
        return task;
      }
    } else {
      return Promise.reject(new Error('NoIdleResource'));
    }
  }

  /**
   * @func getFreeProcess Gets an idle build child process
   */
  getFreeProcess() {
    if (this.idlePool.length) {
      const pid = this.idlePool.shift();
      return this.workPool.get(pid);
    }
    return null;
  }

  /**
   * @func killProcess Kills a child process to free the memory used by the build run
   * @param pid The process pid
   */
  killProcess(pid) {
    let child = this.workPool.get(pid);
    child.disconnect();
    child && child.kill();
    this.workPool.delete(pid);
    child = null;
  }
}

Build.ts

import ProcessPool from './ProcessPool';
import TaskMap from './TaskMap';
import * as path from 'path';

// Log storage
const runningPackTaskLog = new TaskMap('fap_running_pack_task_log');
// Initialize the process pool
const packQueue = new ProcessPool({
  workerPath: path.join(__dirname, '../../task/func/worker'),
  depth: 3
});
// Register the builder file
packQueue.process(path.join(__dirname, '../../task/func/server-build'));
let key: string;
packQueue.on('message', async data => {
  const { cmd, value } = data;
  // Build the redis cache key from the project id, deploy record id, and user id, then store the log
  key = `${appId}_${deployId}_${deployer.userId}`;
  if (cmd === 'log') { // build task log
    runningPackTaskLog.set(key, value);
  } else if (cmd === 'finish') { // the build is complete
    runningPackTaskLog.delete(key);
    // the logs can then be stored in the database
  } else if (cmd === 'fail') { // the build failed
    runningPackTaskLog.delete(key);
    // the logs can then be stored in the database
  }
  // Progress can be synced to the front end through WebSocket
});
// Add a new build task
let [ err ] = await to(packQueue.add({
  ...appAttrs // information required by the build
}));

With the process pool handling multi-process builds, how is each process's build progress recorded? I chose Redis to cache the build progress state, with WebSocket syncing the progress display to the front end; after the build completes, the logs are stored locally. The code above only introduces how the process pool is implemented and used; the concrete application depends on your own design. With the process pool in place, the rest is really just concrete implementation.
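To illustrate the idea, here is a minimal progress cache keyed the same way as above; Redis is swapped for an in-memory Map and the WebSocket push is left out, so this is a sketch rather than the production code:

```typescript
// In-memory stand-in for the Redis progress cache; the key shape
// mirrors the `${appId}_${deployId}_${userId}` convention above.
class ProgressStore {
  private store = new Map<string, { progress: number; logs: string[] }>();

  private key(appId: number, deployId: number, userId: number): string {
    return `${appId}_${deployId}_${userId}`;
  }

  // Record a progress update and optionally append a log line
  update(appId: number, deployId: number, userId: number,
         progress: number, log?: string): void {
    const k = this.key(appId, deployId, userId);
    const entry = this.store.get(k) ?? { progress: 0, logs: [] };
    entry.progress = Math.max(entry.progress, progress); // progress only moves forward
    if (log) entry.logs.push(log);
    this.store.set(k, entry);
  }

  get(appId: number, deployId: number, userId: number) {
    return this.store.get(this.key(appId, deployId, userId));
  }

  // On finish/fail, drop the cache entry and return the collected logs
  // so they can be flushed to local storage or the database.
  clear(appId: number, deployId: number, userId: number): string[] {
    const k = this.key(appId, deployId, userId);
    const logs = this.store.get(k)?.logs ?? [];
    this.store.delete(k);
    return logs;
  }
}
```

In the real service, `update` would write to Redis and the server would push the new progress value to subscribed WebSocket clients.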

The future of front-end builds

Finally, let's talk about some ideas for the future of front-end builds. First, a build platform must guarantee stable builds; on that premise, it should aim for faster builds and move further toward CI/CD, such as a more complete pipeline where, after an update reaches the production environment, code archiving is handled automatically, the newly archived master code is merged back into the development branches, the test environments are updated, and so on.

In terms of server performance, we are considering moving the build itself off the cloud and onto each developer's machine: build locally, deploy in the cloud. That spreads the build load across developers' computers, reduces the pressure on the build server, and leaves the server to handle only the final deployment.

There are also features our developers really want, such as building and releasing projects by group: in one release, select several projects and their release branches and publish the updates together.

Summary

So with a build-and-release platform of your own, you can ship whatever front-end features you want on top of it; isn't that wonderful? I suspect many readers are curious about our VSCode plugin. Besides building projects, it also has other secondary development on features such as the company's test account management and quick Mini Program builds. If you want to learn more about the plugin, look forward to our future posts.

Reference documentation

Node.js child_process documentation

In-depth understanding of Node.js processes and threads

A brief analysis of Node.js processes and threads

Recommended reading

The most familiar stranger RC-form

Vite features and partial source parsing

How do I use Git at work

Serverless Custom (Container) Runtime

Open source works

  • Zhengcaiyun Front-end Weekly

Open source address: www.zoo.team/openweekly/ (there is a WeChat group on the weekly's homepage)

We're hiring

ZooTeam is a young, creative team under the product R&D department of Zhengcaiyun, based in picturesque Hangzhou. The team now has more than 40 front-end members with an average age of 27, and nearly 30% are full-stack engineers. Members include veterans from Alibaba and NetEase as well as fresh graduates from Zhejiang University, the University of Science and Technology of China, Hangzhou Dianzi University, and other schools. Besides day-to-day business work, the team explores and practices in areas such as material systems, engineering platforms, build platforms, performance and user experience, cloud applications, and data analysis and visualization; it has promoted and landed a series of internal technical products and keeps pushing the boundaries of the front-end technology stack.

If you want to change what you have been doing and start doing new things; if you have been told to think more but cannot find room to change; if you have the power to drive results but feel you are not needed; if you want a team behind what you build but there is no place for you to bring people in; if you are tired of "3 years of experience repeated for 5 years"; if you are savvy but always feel a layer of paper between you and clarity; if you believe in the power of belief, that ordinary people can do extraordinary things and that you can meet a better version of yourself; if you want to take part in the growth of a front-end team that deeply understands the business, has a sound technology system, creates value through technology, and spreads its influence as the business takes off, then I think we should talk. Anytime, we are waiting for your email: [email protected]