Project benefits
- Overall development efficiency increased by about 20%.
- First-screen rendering is faster and white-screen time is shorter: page-open speed improved by 40% under weak network conditions.
Trade-offs
Consider the following before choosing SSR:
- SSR requires a server that can run Node.js, and the learning curve is comparatively steep.
- The server must handle a much higher load than one that only serves static files, and issues such as page caching need to be considered.
- One codebase runs in two execution environments. The beforeCreate and created lifecycle hooks execute on both the server and the client, which causes problems if side-effect code or platform-specific APIs end up running in both environments.
We recommend reading the official documentation before putting Vue SSR into practice, so that you have a solid understanding of how it works.
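To make the two-environment pitfall concrete, here is a minimal component-options sketch (hypothetical, not from the project): created runs on both server and client, while mounted runs only in the browser, so timers and other side effects belong in client-only hooks.

```javascript
// Hypothetical component options object, written as plain JS for illustration.
const clockComponent = {
  data() {
    return { now: Date.now() };
  },
  created() {
    // Runs on BOTH the server and the client:
    // keep it free of timers and browser APIs.
  },
  mounted() {
    // Runs on the client only: a safe place for side effects such as timers.
    this.timer = setInterval(() => { this.now = Date.now(); }, 1000);
  },
  beforeDestroy() {
    // Clean up the side effect so the timer does not leak.
    clearInterval(this.timer);
  },
};
```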
First, set up a simple SSR service
Install dependencies
```sh
yarn add vue vue-server-renderer koa
```
vue-server-renderer is the core module for Vue SSR server-side rendering; we will use Koa to build the server.
```js
const Koa = require('koa');
const Vue = require('vue');
const renderer = require('vue-server-renderer').createRenderer();
const router = require('koa-router')();

const server = new Koa();

const app = new Vue({
  data: { msg: 'vue ssr' },
  template: '<div>{{ msg }}</div>',
});

router.get('*', (ctx) => {
  return new Promise((resolve, reject) => {
    // Render the Vue instance to HTML and return it as the response body
    renderer.renderToString(app, (err, html) => {
      if (err) return reject(err);
      ctx.body = html;
      resolve();
    });
  });
});

server.use(router.routes()).use(router.allowedMethods());
module.exports = server;
```
That is all it takes to get a simple server-side render working.
SSR specific implementation
Building on the SSR service above, we will gradually flesh the program out for real-world use.
1. Directory structure

```
app
├── src
│   ├── components
│   ├── router
│   ├── store
│   ├── index.js
│   ├── app.vue
│   ├── index.html
│   ├── entry-client.js   // client entry
│   └── entry-server.js   // server entry
├── server                // Koa server code
└── build                 // webpack build configuration
```
2. Because the server and client environments differ, each needs its own entry function: entry-server.js and entry-client.js. The server-side entry file:
```js
// entry-server.js
import cookieUtils from 'cookie-parse';
import createApp from './index.js';
import createRouter from './router/router';
import createStore from './store/store';

export default context => {
  return new Promise((resolve, reject) => {
    const router = createRouter();
    const app = createApp({ router });
    const store = createStore({ context });
    const cookies = cookieUtils.parse(context.cookie || '');

    // Set the server-side router's location.
    router.push(context.url);

    // Wait until the router has resolved possible async components and hooks.
    router.onReady(() => {
      const matchedComponents = router.getMatchedComponents();
      if (!matchedComponents.length) {
        return reject(new Error('404'));
      }
      // Call asyncData() on every matched route component to prefetch data.
      Promise.all(
        matchedComponents.map(({ asyncData }) =>
          asyncData && asyncData({
            store,
            route: router.currentRoute,
            cookies,
            context: { ...context },
          })
        )
      )
        .then(() => {
          context.meta = app.$meta();
          context.state = store.state;
          resolve(app);
        })
        .catch(reject);
    }, () => {
      reject(new Error('500 Server Error'));
    });
  });
};
```
Client entry file:
```js
// entry-client.js
import cookieUtils from 'cookie-parse';
import createApp from './index.js';
import createRouter from './router/router';
import createStore from './store/store';

export const initClient = () => {
  const router = createRouter();
  const app = createApp({ router });
  const store = createStore({});
  const cookies = cookieUtils.parse(document.cookie);

  router.onReady(() => {
    if (window.__INITIAL_STATE__) {
      store.replaceState(window.__INITIAL_STATE__);
    }
    // Add a router hook to handle asyncData.
    // Register it after the initial route resolves so we don't double-fetch
    // data we already have. Using router.beforeResolve() ensures all async
    // components have been resolved.
    router.beforeResolve((to, from, next) => {
      const matched = router.getMatchedComponents(to);
      const prevMatched = router.getMatchedComponents(from);
      // We only care about components that weren't already rendered, so we
      // compare the two matched lists to find the components that differ.
      let diffed = false;
      const activated = matched.filter((c, i) => {
        return diffed || (diffed = (prevMatched[i] !== c));
      });
      if (!activated.length) {
        return next();
      }
      Promise.all(activated.map(c => {
        if (c.asyncData) {
          // Cookies are passed through to the data-prefetch function: they
          // must be forwarded manually when prefetching on the server.
          return c.asyncData({
            store,
            route: to,
            cookies,
            context: {},
          });
        }
      })).then(() => {
        next();
      }).catch(next);
    });

    app.$mount('#app');
  });
};
```
3. Since the Node.js server is a long-running process, module code is evaluated once and kept in memory, so a module-level instance would be shared by every request. To avoid this cross-request state pollution, the program exposes a factory function that is executed repeatedly, creating a fresh instance for each request.
```js
// index.js
import Vue from 'vue';
import App from './App.vue';

export default function createApp({ router }) {
  const app = new Vue({
    router,
    render: h => h(App),
  });
  return app;
}
```
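To see why the factory matters, here is a minimal sketch of cross-request state pollution (illustrative names, not project code): a module-level singleton is mutated by every request, while a factory hands each request its own instance.

```javascript
// Module-level singleton: every "request" mutates the same object.
const sharedApp = { user: null };
function handleWithSingleton(user) {
  sharedApp.user = user;
  return sharedApp;
}

// Factory: each "request" gets a fresh instance, so state cannot leak
// from one request into another.
function createAppState() {
  return { user: null };
}
function handleWithFactory(user) {
  const app = createAppState();
  app.user = user;
  return app;
}
```

With the singleton, a request made for one user ends up seeing whatever the latest request wrote; with the factory, each result object is independent.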
4. Automatically load router and store modules. In a SPA project the router and store are managed from a single entry file, so we split each functional module's store and router according to project needs. As the project grows, manually importing and editing the entry files for every new module produces side effects each time. To reduce the side effects of modifying the store and router entries, the project's router and store modules are loaded automatically. Below is the store implementation; the router implementation is similar.
```js
// store/store.js
// ...
// Use require.context to match every module's store files and load them all at once.
const storeContext = require.context('../module/', true, /\.(\/.+)\/js\/store(\/.+){1,}\.js/);
// ...
const getStore = (context) => {
  storeContext.keys().forEach((key) => {
    const filePath = key.replace(/^(\.\/)|(js\/store\/)|(\.js)$/g, '');
    let moduleData = storeContext(key).default || storeContext(key);
    const namespaces = filePath.split('/');
    moduleData = normalizeModule(moduleData, filePath);
    store.modules = store.modules || {};
    const storeModule = getStoreModule(store, namespaces);
    VUEX_PROPERTIES.forEach((property) => {
      mergeProperty(storeModule, moduleData[property], property);
    });
  });
};

export default ({ context }) => {
  getStore(context);
  return new Vuex.Store({
    modules: { ...store.modules },
  });
};
```
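The helpers used above (normalizeModule, getStoreModule, mergeProperty) are project-specific and omitted. As a rough sketch of the underlying idea, assuming a file path such as user/profile maps to nested namespaces, module data can be registered like this (illustrative only, not the actual helpers):

```javascript
// Register moduleData under a nested namespace path, e.g. ['user', 'profile'].
function registerModule(root, namespaces, moduleData) {
  let current = root;
  for (const ns of namespaces) {
    current.modules = current.modules || {};
    current.modules[ns] = current.modules[ns] || { namespaced: true };
    current = current.modules[ns];
  }
  Object.assign(current, moduleData);
  return root;
}
```

The resulting tree can then be handed to `new Vuex.Store({ modules: root.modules })`.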
5. Webpack build configuration
```
build
├── webpack.base.conf.js    // shared configuration
├── webpack.client.conf.js  // client build configuration
└── webpack.server.conf.js  // server build configuration
```
webpack.base.conf.js holds the common build configuration for the project and can be adjusted as needed. Below are the webpack.client.conf.js and webpack.server.conf.js configurations.
The webpack.server.conf.js configuration uses the VueSSRServerPlugin to generate the server bundle object, by default vue-ssr-server-bundle.json, which holds the entire output of the server build.
```js
const merge = require('webpack-merge');
const nodeExternals = require('webpack-node-externals');
const VueSSRServerPlugin = require('vue-server-renderer/server-plugin');
const path = require('path');
const baseConfig = require('./webpack.base.conf.js');

const resolve = (src = '') => path.resolve(__dirname, '/', src);

const config = merge(baseConfig, {
  entry: {
    app: ['./src/entry-server.js'],
  },
  target: 'node',
  devtool: 'source-map',
  output: {
    filename: '[name].js',
    publicPath: '',
    path: resolve('./dist'),
    libraryTarget: 'commonjs2',
  },
  // Tell webpack not to bundle these modules or any of their submodules.
  externals: nodeExternals(),
  plugins: [new VueSSRServerPlugin()],
});

module.exports = config;
```
webpack.client.conf.js configures the client build, similar to the server. VueSSRClientPlugin generates the client build manifest vue-ssr-client-manifest.json, which contains all the static resources and dependencies the client needs, so resource injection and data prefetching can be inferred automatically.
```js
const VueSSRClientPlugin = require('vue-server-renderer/client-plugin');
const merge = require('webpack-merge');
const webpack = require('webpack');
const baseConfig = require('./webpack.base.conf');
// Uploads first-load and on-demand resources to the CDN
// (developed in-house on top of an open-source plugin).
const UploadPlugin = require('@q/hj-webpack-upload');
const path = require('path');

const resolve = (src = '') => path.resolve(__dirname, '/', src);

// `cdn`, `Source` and `UglifyJs` come from project configuration omitted here.
const config = merge(baseConfig, {
  entry: {
    app: ['./src/entry-client.js'],
  },
  target: 'web',
  output: {
    filename: '[name].js',
    path: resolve('./dist'),
    publicPath: '',
    libraryTarget: 'var',
  },
  plugins: [
    new VueSSRClientPlugin(),
    new webpack.HotModuleReplacementPlugin(),
    new UploadPlugin(cdn, {
      enableCache: true, // cache uploaded files
      logLocal: false,
      src: path.resolve(__dirname, '..', Source.output),
      dist: path.resolve(__dirname, '..', Source.output),
      beforeUpload: (content, location) => {
        if (path.extname(location) === '.js') {
          return UglifyJs.minify(content, {
            compress: true,
            toplevel: true,
          }).code;
        }
        return content;
      },
      compilerHooks: 'done',
      onError(e) { console.log(e); },
    }),
  ],
});

module.exports = config;
```
6. SSR server implementation. The SSR server is built on Koa: app.js mainly sets up the server environment, while the SSR logic lives in ssr.js and is attached to the main program as middleware.
```js
// ssr.js
// ...
// Render the bundle to a string.
async render(context) {
  const renderer = await this.getRenderer();
  return new Promise((resolve, reject) => {
    renderer.renderToString(context, (err, html) => {
      if (err) {
        reject(err);
      } else {
        resolve(html);
      }
    });
  });
}

// Get the renderer object.
getRenderer() {
  return new Promise((resolve, reject) => {
    // Read the HTML template plus the server and client JSON files
    // generated by the build step.
    const htmlPath = `${this.base}/index.html`;
    const bundlePath = `${this.base}/vue-ssr-server-bundle.json`;
    const clientPath = `${this.base}/vue-ssr-client-manifest.json`;
    fs.stat(htmlPath, (statErr) => {
      if (!statErr) {
        fs.readFile(htmlPath, 'utf-8', (err, template) => {
          const bundle = require(bundlePath);
          const clientManifest = require(clientPath);
          const renderer = createBundleRenderer(bundle, {
            template,
            clientManifest,
            runInNewContext: false,
            shouldPrefetch: () => false,
            shouldPreload: (file, type) => false,
          });
          resolve(renderer);
        });
      } else {
        reject(statErr);
      }
    });
  });
}
// ...
```

```js
// app.js
const Koa = require('koa');
const router = require('koa-router')();
const ssr = require('./ssr');

const server = new Koa();
server.use(router.routes()).use(router.allowedMethods());
server.use(ssr(server));

// Error handling
server.on('error', (err, ctx) => {
  console.error('server error', err, ctx);
});

module.exports = server;
```
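How ssr.js attaches to Koa is elided above; as a framework-agnostic sketch of the middleware shape (hypothetical, the project's actual ssr() export may differ), it wraps the renderer's render(context) and fills the response body:

```javascript
// `render` stands in for the SSR class's render(context) method shown above.
function createSsrMiddleware(render) {
  return async (ctx) => {
    // Build the render context from the incoming request.
    const context = { url: ctx.url, cookie: ctx.headers.cookie };
    ctx.type = 'html';
    ctx.body = await render(context);
  };
}
```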
The above is a minimal Vue SSR implementation; a real project still needs its various configurations fleshed out. A few issues to watch for on this foundation:
- As mentioned above, only beforeCreate and created are invoked during server-side rendering, and the application instance on the server is never destroyed, so side-effect code in those two lifecycles will leak: using setTimeout or setInterval there, for example. Put side-effect code in client-only lifecycle hooks instead. Also, there is no window or document object on the server; using them there throws an error, so platform-specific code needs environment-aware handling.
- Cookie passthrough during prefetch. When asyncData prefetches data on the server, the cookies from the client request are not carried along automatically, so the client's cookies must be added to the outgoing request headers manually.
- In a SPA you need to modify the page's head tags dynamically for search engines. vue-meta is recommended here.
```js
// src/index.js
// ...
Vue.use(Meta);
// ...

// entry-server.js
// ...
context.meta = app.$meta();
// ...
```
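Two of the issues above can be sketched in a few lines (a minimal sketch with illustrative names, not project code): guarding browser globals on the server, and manually serializing the client's cookies into a request header for server-side prefetch.

```javascript
// 1) Guard browser globals: true in the browser, false in Node.js.
const isClient = typeof window !== 'undefined';
function currentTitle() {
  // document only exists on the client; return a fallback on the server.
  return isClient ? document.title : '';
}

// 2) Forward the client's cookies when the server prefetches data.
//    `cookies` is a parsed name/value map such as cookie-parse returns.
function buildRequestHeaders(cookies) {
  const cookieHeader = Object.entries(cookies)
    .map(([name, value]) => `${name}=${value}`)
    .join('; ');
  return cookieHeader ? { Cookie: cookieHeader } : {};
}
```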
Deployment plan
After completing development we also need to consider deployment. In an earlier SSR migration we load-balanced external traffic across servers and used PM2 to manage the Node process on each one. In practice this approach has some problems.
- Runtime environment
  - Manual operations: the environment (Node, PM2) on each server is configured by hand, and scaling out or updating environment dependencies means synchronizing environments across servers manually.
  - The local development environment must stay consistent with the server environment. We have seen problems caused by mismatches; the probability is small, but it deserves caution.
- Operations
  - Rollback currently amounts to releasing a new version online and re-triggering the CI pipeline. If the problem lies in the runtime environment itself, it is harder still: there is no way to quickly roll back to a specified version and environment.
To solve these problems, we introduced a new technical stack:

- Docker: container technology, a lightweight and fast "virtualization" solution
- Kubernetes: a container orchestration system

Docker runs through the whole development, packaging, and production flow to keep every environment consistent, and Kubernetes handles container orchestration.
After integration, the general scheme flow is as follows
- Docker is used for local development
- Push code to Gitlab to trigger CI
- CI builds an image on top of the base image, one per commit ID, pushes it to a private registry, and triggers the CD
- The CD uses kubectl to instruct the K8s cluster to update the application
Docker was used throughout development, packaging, and deployment to ensure a consistent environment for all phases.
Local development
In the local development phase, we separate the dependency download from the development mode.
```sh
# Dependency download.
# Mount package.json, yarn.lock and .yarnrc into /opt/work/, and mount the
# mobile_node_modules data volume at /opt/work/node_modules so installed
# dependencies persist between runs.
docker run -it \
  -v $(pwd)/package.json:/opt/work/package.json \
  -v $(pwd)/yarn.lock:/opt/work/yarn.lock \
  -v $(pwd)/.yarnrc:/opt/work/.yarnrc \
  -v mobile_node_modules:/opt/work/node_modules \
  --workdir /opt/work \
  --rm node:13-alpine \
  yarn
```
For dependency downloads, the idea is to treat the node_modules directory as a data volume: mount only the files that affect dependency resolution into the container, and attach the node_modules data volume at that folder. The installed dependencies are thus persisted across containers.
```sh
# Development mode.
# Mount the project directory and the node_modules data volume into the
# container, then expose the HotReload socket (8081), the debugger (9229)
# and the Node server (3003).
docker run -it \
  -v $(pwd)/:/opt/work/ \
  -v mobile_node_modules:/opt/work/node_modules \
  --expose 8081 -p 8081:8081 \
  --expose 9229 -p 9229:9229 \
  --expose 3003 -p 3003:3003 \
  --workdir /opt/work \
  node:13-alpine \
  ./node_modules/.bin/nodemon --inspect=0.0.0.0:9229 --watch server server/bin/www
```
In development mode we only need to mount the node_modules data volume created earlier at the node_modules directory and mount the project directory into the container, then expose the required ports: 8081 is the HotReload socket port, 3003 the Node service port, and 9229 the debugger port. With the start command set to the development-mode command, development proceeds as usual.
Once the development is complete, we push the code and trigger the CI.
CI
The diagram above shows our CI process.
In the CI phase, we generate a corresponding image for each commit record through Dockerfile. The advantage of this is that we can always find the corresponding mirror to roll back by submitting the record.
```dockerfile
FROM node:13-alpine

# Dependency layer: only rebuilt when the lockfiles change
COPY package.json /opt/dependencies/package.json
COPY yarn.lock /opt/dependencies/yarn.lock
COPY .yarnrc /opt/dependencies/.yarnrc
RUN cd /opt/dependencies \
  && yarn install --frozen-lockfile \
  && yarn cache clean \
  && mkdir /opt/work \
  && ln -s /opt/dependencies/node_modules /opt/work/node_modules

# Project-specific files
COPY ci/docker/docker-entrypoint.sh /usr/bin/docker-entrypoint.sh
COPY ./ /opt/work/
RUN cd /opt/work \
  && yarn build

WORKDIR /opt/work
EXPOSE 3003
ENV NODE_ENV production
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["node", "server/bin/www"]
```
Walking through the Dockerfile above:

- Use node:13-alpine as the base image.
- Copy the dependency files into the container and install dependencies, symlink node_modules into /opt/work, and clear the install cache.
- Copy the project files into the container and run the client build command.
- Set environment variables, expose the service port, and set the image's start command.
```sh
docker build -f Dockerfile --tag frontend-mobile:COMMIT_SHA .
```
Finally, use the command above to package the version as an image and push it to a private repository.
Some tips we used to optimize compilation speed and image size for Dockerfile:
- Front-load invariant operations: splitting dependency installation and compilation into two RUN instructions exploits Docker's layer caching, so when dependencies are unchanged the install layer is reused and the download is skipped entirely.
- Clear unneeded files after each step, such as the global cache yarn generates; these caches do not affect the running program. Most package managers produce similar caches that can be cleaned as needed.
- Use `.dockerignore` to exclude files that do not affect the build output. When only those files change, packaging reuses the previous layers: changing a README or some K8s deployment config does not trigger a rebuild.
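For example, a .dockerignore along these lines (illustrative entries; the actual list depends on the project) keeps docs and deployment config out of the build context:

```
# .dockerignore (illustrative)
.git
node_modules
dist
README.md
k8s/
```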
After the packaging is complete, we push the image to the private repository and trigger the CD.
CD
During the deployment phase we use Kubernetes for container orchestration. Quoting the official introduction:
K8s is an open source system for automated deployment, scaling, and management of containerized applications.
K8s is flexible and intelligent: we only describe what kind of application we need, and K8s automatically places containers based on resource requirements and other constraints, with automatic horizontal scaling and self-healing, letting us track and monitor the health of each application.
Our purpose in adopting it is simple: automated operations, plus non-invasive log collection and application monitoring.
- Deployment: describes a desired state, i.e. the application requirements.
- Service: provides a stable external entry point to an application service or a set of Pods.
- Ingress: routing. External requests reach the Ingress first, which distributes them to the different Services according to the configured rules.
- Pod: a process running in the cluster and the smallest basic execution unit.
The CD container drives the K8s cluster through kubectl. Each time a branch receives a commit, the CD is triggered and creates a separate Deployment for that branch's environment. A Service exposes the set of Pods the Deployment manages, each Pod running the application image the Deployment specifies; finally an Ingress routes requests to the right Service by domain name.
K8s configuration
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-mobile        # name of the Deployment
  namespace: mobile            # namespace
  labels:
    app: frontend-mobile       # label
spec:
  selector:
    matchLabels:
      # Pods selected by this label are managed by this Deployment
      app: frontend-mobile
  replicas: 8                  # number of Pod replicas, default is 1
  template:                    # equivalent to the Pod configuration
    metadata:
      name: frontend-mobile    # name of the Pod
      labels:
        app: frontend-mobile   # Pod labels
    spec:
      containers:
        - name: frontend-mobile
          image: nginx:latest
          ports:
            - containerPort: 3003
          resources:           # resource limits
            requests:
              memory: "256Mi"
              cpu: "250m"      # 0.25 CPU
            limits:
              memory: "512Mi"
              cpu: "500m"      # 0.5 CPU
          livenessProbe:
            httpGet:
              path: /api/serverCheck
              port: 3003
              httpHeaders:
                - name: X-Kubernetes-Health
                  value: health
            initialDelaySeconds: 15
            timeoutSeconds: 1
---
apiVersion: v1
kind: Service
metadata:
  name: frontend-mobile        # name of the Service
  namespace: mobile            # namespace
  labels:
    app: frontend-mobile       # label
spec:
  selector:
    app: frontend-mobile       # matching Pod label
  ports:
    - protocol: TCP
      port: 8081               # Service port
      targetPort: 3003         # container port being proxied
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frontend-mobile
  namespace: mobile            # namespace
  labels:
    app: frontend-mobile       # label
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: local-deploy.com
      http:
        paths:
          - path: /
            backend:
              serviceName: frontend-mobile   # name of the referenced Service
              servicePort: 8081              # referenced Service port, matching the Service
```
For the Deployment we chose small per-Pod resource quotas and a high replica count. The reason for the small quota on a single Pod is that SSR services are prone to memory leaks: with a small quota, a leaking Pod hits its limit and is simply restarted, which temporarily contains the service impact until the leak is found and fixed.
For other configurations, please refer to the official documentation.
kubernetes.io/docs/refere…
At this point, the deployment process is complete.
Further work
Gitlab linkage Kubernetes
Log collection
AliNode access