Front-end pain points in extensive development mode
In recent years, front-end business has grown rapidly, but front-end infrastructure has not kept up with that expansion. Once the business scales, we can no longer develop in the extensive, small-workshop way: how do we quickly standardize a new project before development starts? How do we collaborate efficiently during development? How do we ship correctly and quickly once development is done? How do we keep the various services running stably after launch? Around these questions, here are some of the related pieces of infrastructure: a complete build and release pipeline (unified scaffolding, deployment services, etc.), a complete test environment, a front-end error log system (collection, statistics, alerting), offline management of front-end resources, incremental (delta) download of front-end resources, Node application logging (with the full call chain), a performance and fault monitoring platform, and so on.
Among these, there has long been a pain point before a front-end feature goes live: you continue development on an existing project, and after finishing locally you have to start a local service so the PM can review the result. But sometimes the PM or other team members want to see the effect and it is not convenient for them to sit at your computer, so you always end up coordinating schedules. If, once your work is done, you could deploy it directly to a test environment and drop the link into the group chat, it would be much more convenient for everyone to preview the result at any time.
Therefore, the goal of this article is: for front-end projects whose front end and back end are separated, a Git push should deploy the branch directly to a test environment where the result can be previewed online.
GitLab CI introduction
Since version 7.12, GitLab has supported CI/CD configured through a .gitlab-ci.yml file. For the specific configuration options, see the documentation: docs.gitlab.com.cn/ee/ci/yaml/
To let a project execute the tasks configured in that YAML file, we also need to deploy the corresponding environment first: install and register a Runner, see docs.gitlab.com/runner/#usi… . A Runner can execute jobs in several ways (executors); the most popular is the Docker executor, which runs each job inside a container that bundles a basic environment. During registration, the Runner is associated with the GitLab instance (the Runner is usually not deployed on the same server as GitLab). The tasks configured in the YAML file are then executed by the Runner, and the results are reported back to the GitLab server.
Finally, the project needs to enable a shared Runner or a specific Runner in its Settings.
Building a front-end preview service based on Node
Node is used to set up the service, host static resources, and forward proxied requests.
The basic flow
- git push
- The Runner executes the tasks configured in the YAML file (a minimal example is sketched right after this list)
- Build the resources
  Package for the test environment: npm run build -e test
- Upload the resources to the Node server
  This step is extracted into an npm package; running the festaging-scripts command uploads two kinds of resources:
  - The built static resources
  - The required request proxy configuration (by default, .festaging.config.js in the project root is read; explained below)
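To make the CI side concrete, a minimal .gitlab-ci.yml along these lines could drive this flow. Only the npm run build -e test command comes from the list above; the stage, job name, Node image and the way festaging-scripts is invoked are assumptions for illustration, not the actual configuration.

# Hypothetical sketch of the CI job that builds and uploads a branch preview
stages:
  - preview

deploy_preview:
  stage: preview
  image: node:10                  # any Node image available to the Runner (assumption)
  script:
    - npm install
    - npm run build -e test       # package for the test environment
    - npx festaging-scripts       # upload built assets + proxy config (invocation assumed)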
The basic process is easy to understand, but given the limitations of the company’s existing infrastructure, some issues become more complicated:
- Node service deployment is based on the company's existing container management solution and supports dynamically scaling up or destroying instances, so the Node service cluster has no fixed IP addresses. We need to obtain the IP addresses of all instances before uploading static resources.
- The corporate load-balancing service is centrally managed, and its nginx configuration does not support wildcard domain resolution. Therefore different projects cannot each get their own secondary domain (aa.xxx.com, bb.xxx.com); they can only share a single domain (festaging.xxx.com). To distinguish projects, we have to put the information in the path, such as festaging.xxx.com/aa/branch1/, which leads to two problems:
  - Interface proxying becomes harder: if wildcard domains were supported, each project's requests could be proxied according to the domain (each project configures the real domain of its back-end interfaces), e.g. aa.xxx.com/api/ -> aa.config.origin/api/. Without that, every project's requests still go to /api/**, so how does the Node service know which project a request comes from and forward it correctly?
  - Multi-route projects need support: once the static resources are wired up in Node, entering festaging.xxx.com/aa/branch1/ in the browser opens the project's home page, but clicking a button to switch routes changes the URL to festaging.xxx.com/tab2. That path is then only reachable as festaging.xxx.com/tab2, not as festaging.xxx.com/aa/branch1/tab2, so URLs within the same project are not consistent.
Upload static resources
We need to obtain the addresses of all instances on which the Node service is deployed, and then upload the resources to each of them.
- SCP: probably the most common way to transfer files between machines, but the first connection requires SSH authentication, which means writing a password in plain text in the script, and we do not even know the connection passwords of the Node service's containers.
- HTTP upload: simple to operate, but the Node service needs to expose an upload interface.
We chose the latter:
- The script executed in the Runner is responsible for: build -> zip -> POST to the Node service (this logic is extracted into an npm package, so the script in the YAML configuration only runs the corresponding npm command; a rough sketch of what it does follows this list).
- The Node service provides an interface that accepts the POSTed zip package, unzips it, and moves it to the specified location (a sketch follows the stream helper below).
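The following is only a rough, hypothetical sketch of the upload half of such a script; the /upload endpoint path, its query parameters and the use of the zip CLI are assumptions for illustration, not the actual implementation of the npm package.

// upload.js - hypothetical sketch of the Runner-side script: build -> zip -> POST
const { execSync } = require('child_process');
const fs = require('fs');
const http = require('http');

function buildAndUpload(host, project, branch) {
  // 1. Build for the test environment (command taken from the flow above)
  execSync('npm run build -e test', { stdio: 'inherit' });
  // 2. Zip the build output (assumes the zip CLI is available in the Runner image)
  execSync('zip -r dist.zip dist', { stdio: 'inherit' });
  // 3. Stream the archive to the Node service's (assumed) upload endpoint
  const req = http.request(
    {
      host,
      port: 8080,
      path: `/upload?project=${project}&branch=${branch}`,
      method: 'POST',
      headers: { 'Content-Type': 'application/zip' },
    },
    res => console.log(`upload to ${host}: ${res.statusCode}`)
  );
  fs.createReadStream('dist.zip').pipe(req);
}

// In practice this would be called once per Node instance IP.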
PS: combining Koa's async/await with stream operations has always felt a bit tricky, hence the small Promise wrapper below.
const fs = require('fs');

// Wrap stream piping in a Promise so it can be awaited in Koa middleware.
function pipe(from, to, options) {
  return new Promise((resolve, reject) => {
    from.pipe(to, options);
    from.on('error', reject);
    to.on('error', reject);
    // Resolve once the writable side has flushed everything.
    to.on('finish', resolve);
  });
}

// Copy an uploaded file from `input` to `output` using streams.
async function processZipFiles(input, output) {
  const reader = fs.createReadStream(input);
  const writer = fs.createWriteStream(output);
  await pipe(reader, writer);
}
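To make the second bullet above concrete, here is a minimal, hypothetical sketch of a Koa handler that receives the zip, saves it, and unpacks it. The /upload route, the query parameters, the target directories and the use of the system unzip command are all assumptions for illustration; the real service differs in detail.

// server.js - hypothetical sketch of the upload interface on the Node service
const Koa = require('koa');
const fs = require('fs');
const path = require('path');
const util = require('util');
const { pipeline } = require('stream');
const { execFile } = require('child_process');

const pipelineAsync = util.promisify(pipeline);
const execFileAsync = util.promisify(execFile);
const app = new Koa();

app.use(async ctx => {
  if (ctx.method !== 'POST' || ctx.path !== '/upload') return;

  const { project, branch } = ctx.query;
  const zipPath = path.join('/tmp', `${project}_${branch}.zip`);
  const destDir = path.join('/data/festaging', project, branch);

  // Persist the raw request body (the zip archive) to disk.
  await pipelineAsync(ctx.req, fs.createWriteStream(zipPath));

  // Unpack into the directory served as static resources
  // (assumes the unzip CLI exists inside the container).
  fs.mkdirSync(destDir, { recursive: true });
  await execFileAsync('unzip', ['-o', zipPath, '-d', destDir]);

  ctx.body = { ok: true };
});

app.listen(8080);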
Interface proxy processing
Each project needs to specify the real domain of its back-end requests so that the project's requests can be forwarded. Beyond that, we also need to support proxying certain interfaces to other specified addresses, just as webpack-dev-server does.
So we support two ways of specifying the proxy targets:
- Specify the proxy targets as a JSON file, with content in the following format:
{
  proxyApi: {
    '/api/xx': 'https://www.baidu.com',
    'default': 'https://v.qq.com'
  }
}
This way, fields other than proxyApi can be added to the configuration later, leaving room for future business expansion.
- Pass the target as a parameter when the script is executed: --target "https://www.baidu.com"
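Either way, the Node side ends up holding a proxyApi map and has to decide, for a given request path, whether a specific entry applies or the default target should be used. A small helper along these lines could do that lookup (an illustrative sketch, not the service's actual code):

// Pick the proxy target for a request path: the longest matching
// path prefix wins, otherwise fall back to the 'default' entry.
function resolveProxyTarget(proxyApi, reqPath) {
  let best = '';
  for (const prefix of Object.keys(proxyApi)) {
    if (prefix !== 'default' && reqPath.startsWith(prefix) && prefix.length > best.length) {
      best = prefix;
    }
  }
  return best ? proxyApi[best] : proxyApi['default'];
}

// resolveProxyTarget({ '/api/xx': 'https://www.baidu.com', default: 'https://v.qq.com' }, '/api/xx/list')
//   -> 'https://www.baidu.com'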
How do we inject project-related information into interface requests?
Because all projects share the same domain and are distinguished only by the path, while interface requests are simply domain + interface path, we need to prefix each project's interface paths with its project information: for the test environment build, a request address changes from /api/xxx to /project1/branch1/api/xxx. But it is not practical to literally rewrite every address in the built files, and we cannot reliably identify every place where the prefix is needed. Since the front end makes network requests through XMLHttpRequest and fetch, we only need to override those methods at the top of the HTML file.
// Injected at the top of the HTML. '${prefix}' is a placeholder that is replaced
// with the project/branch prefix (e.g. '/project1/branch1') when the snippet is injected.
function buildUrl(url, prefix) {
  // Only prefix same-origin paths such as '/api/xxx'.
  if (typeof url === 'string' && url.charAt(0) === '/') {
    return prefix + url;
  }
  return url;
}

var originXHROpen = XMLHttpRequest.prototype.open;
XMLHttpRequest.prototype.open = function (method, url, async, user, password) {
  return originXHROpen.call(this, method, buildUrl(url, '${prefix}'), async, user, password);
};

if (window.fetch) {
  var originFetch = window.fetch;
  window.fetch = function () {
    var input = arguments[0];
    if (typeof input === 'string') {
      arguments[0] = buildUrl(input, '${prefix}');
    }
    return originFetch.apply(this, arguments);
  };
}
How is this configuration sent to the Node service?
- As files: the Node side reads the forwarding configuration from all files in a folder, and restarts the Node service whenever the folder's contents change.
- As an in-memory variable: the Node side maintains a proxy config object and, after receiving the request, modifies its contents directly without restarting the Node service. The config format is as follows:
{
  project1: {
    branch1: {
      proxyAPI: {
        '/api/xx': 'https://www.baidu.com',
        'default': 'https://v.qq.com'
      }
    }
  }
}
The latter is obviously a better solution.
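Putting the pieces together, the request-forwarding side of the Node service can then look roughly like the sketch below: parse the project and branch out of the path prefix injected into the front end, look up the in-memory proxy config, and forward the request. The http-proxy package and the exact path layout are assumptions for illustration; static file serving is omitted.

// Hypothetical sketch of the proxy middleware on the Node service.
const Koa = require('koa');
const httpProxy = require('http-proxy');

const proxy = httpProxy.createProxyServer({});

// In-memory proxy config, updated by the upload interface and restored
// from persistent storage on startup (see the next section).
const proxyConfig = {
  project1: {
    branch1: {
      proxyAPI: { '/api/xx': 'https://www.baidu.com', default: 'https://v.qq.com' },
    },
  },
};

const app = new Koa();

app.use(async ctx => {
  // Requests arrive as /<project>/<branch>/api/... thanks to the injected prefix.
  const [, project, branch, ...rest] = ctx.path.split('/');
  const conf = proxyConfig[project] && proxyConfig[project][branch];
  if (!conf) return; // unknown project/branch -> falls through to a 404

  const apiPath = '/' + rest.join('/');
  const matched = Object.keys(conf.proxyAPI).find(
    p => p !== 'default' && apiPath.startsWith(p)
  );
  const target = matched ? conf.proxyAPI[matched] : conf.proxyAPI.default;

  // Hand the raw req/res to http-proxy and tell Koa not to respond itself.
  ctx.respond = false;
  ctx.req.url = apiPath; // strip the project/branch prefix before forwarding
  proxy.web(ctx.req, ctx.res, { target, changeOrigin: true });
});

app.listen(8080);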
Persisting static resources and the proxy config
As mentioned above, static resources are sent directly to each instance, and the proxy config is likewise sent to every Node instance and then modified directly in memory. If the Node service restarts or its Docker container is recreated, all of this is lost. So both need to be persisted, and when the Node service restarts, the initial values are read back from that storage.
- Persisting static resources: as files, compressed into ${project}_${branch}.zip and stored on the company's unified file storage service.
- Persisting the proxy config: since the config is essentially an object, it is stored in MongoDB.
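As an illustration of the MongoDB side (the database and collection names here are made up; only the idea of saving the per-project config and reloading it on startup comes from the text):

// Hypothetical sketch of persisting / restoring the proxy config with MongoDB.
const { MongoClient } = require('mongodb');

const client = new MongoClient('mongodb://localhost:27017');

// Save (or update) the config of one project/branch whenever an upload arrives.
async function saveProxyConfig(project, branch, proxyAPI) {
  await client.connect();
  await client
    .db('festaging')
    .collection('proxyConfig')
    .updateOne(
      { project, branch },
      { $set: { project, branch, proxyAPI } },
      { upsert: true }
    );
}

// Rebuild the in-memory proxyConfig object when the Node service starts.
async function loadProxyConfig() {
  await client.connect();
  const docs = await client.db('festaging').collection('proxyConfig').find().toArray();
  const config = {};
  for (const doc of docs) {
    config[doc.project] = config[doc.project] || {};
    config[doc.project][doc.branch] = { proxyAPI: doc.proxyAPI };
  }
  return config;
}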
Support for multi-route projects
Because the team now standardizes on the React stack, multi-route support centers on react-router-dom. The usual router component is BrowserRouter or StaticRouter, whose basename parameter prefixes URL paths, which is exactly the project information we need in this article, so we can modify the basename parameter to support multiple routes.
- Option 1: fork react-router-dom, modify how it handles basename, and add a webpack resolve alias pointing react-router-dom to the fork. The downsides: the fork has to be kept in sync with upstream, and older versions may not be well supported.
- Option 2: add a webpack plugin that, during the build, parses each module's AST, checks whether BrowserRouter or StaticRouter is used, rewrites the basename value, and returns the new code.
We chose option 2. The parser API provided by webpack 4 is used to parse each module processed by webpack, similar to what UseStrictPlugin.js does; after obtaining the AST, the basename value is modified using Babel's traverse and generate packages.
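As a simplified illustration of the Babel half of that idea (this sketch operates on source that still contains JSX rather than on webpack's parsed modules, and the function name is made up; it only shows the traverse/generate approach of forcing a basename onto the router):

// injectBasename.js - hypothetical sketch: force a basename onto
// <BrowserRouter> / <StaticRouter> using Babel traverse + generate.
const parser = require('@babel/parser');
const traverse = require('@babel/traverse').default;
const generate = require('@babel/generator').default;
const t = require('@babel/types');

function injectBasename(source, prefix) {
  const ast = parser.parse(source, { sourceType: 'module', plugins: ['jsx'] });

  traverse(ast, {
    JSXOpeningElement(path) {
      const name = path.node.name;
      if (!t.isJSXIdentifier(name)) return;
      if (name.name !== 'BrowserRouter' && name.name !== 'StaticRouter') return;

      // Drop any existing basename attribute, then set ours, e.g. '/project1/branch1'.
      path.node.attributes = path.node.attributes.filter(
        attr => !(t.isJSXAttribute(attr) && attr.name.name === 'basename')
      );
      path.node.attributes.push(
        t.jsxAttribute(t.jsxIdentifier('basename'), t.stringLiteral(prefix))
      );
    },
  });

  return generate(ast, {}, source).code;
}

// injectBasename(appSource, '/project1/branch1')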