[toc]

Preface

A concise, no-frills checklist of Web front-end performance optimizations. Each item covers the concept, hands-on practice, and reference materials. It is suited both to interview prep and to day-to-day practice.

This is a sizable project. Before starting, let's agree on terminology and clarify the purpose and requirements of each part.

  • Concept: translate jargon into plain language that can be understood and remembered; in principle, readability > formality.

  • Practice: actually run everything by hand rather than being an armchair player; record the core implementation so it can be conveniently copy-pasted.

  • References: primary sources are preferred, to ensure completeness, accuracy, and timeliness of the information, unless the first-hand material is too hard to digest…

I. Network level

1. DNS prefetch

Concept

dns-prefetch is a DNS pre-resolution technique: the browser resolves and caches DNS for cross-origin domains before the resources are actually requested, reducing the DNS-lookup latency when the real request is made. It works especially well for sites that reference many third-party domains.

Practice

Add a link tag whose rel attribute is dns-prefetch, usually placed inside the HTML head.

<link rel="dns-prefetch" href="//xxx.download.com">

The href value is the domain to pre-resolve: either the domain hosting resources that will be loaded, or a domain the user is likely to navigate to.

Note

There is a similar technique for pre-establishing the TCP/TLS connection, called preconnect. A full description is available in the references.

References

  • MDN Web Docs

2. Use the browser cache

Concept

The browser cache is a local copy, on disk or in memory, of previous response results. When the same request is made again, the browser responds from the local copy instead of fetching from the origin server every time. This improves client response times and relieves load on the server.

The rules governing when and how the cache is used are called the caching policy, which is divided into the strong cache and the negotiated cache.

The overall caching flow is roughly as follows:

① When a request is initiated, the browser checks the local cache. If a copy exists and has not expired, the strong cache is hit: the browser responds with the local copy and status code 200, and the Size column in the Network panel shows "disk cache".

② If there is no cached copy, or it has expired, the browser asks the origin server whether the file has changed. The server judges the freshness of the target file based on the relevant request-header fields.

③ If the target file has not changed, the negotiated cache is hit: the server sets a new expiration time and the browser responds with the local copy and status code 304.

④ If the target file has changed, the server responds with the new file and status code 200, and the browser updates its local copy.

There are several key points in this process:

  • How do I determine whether the cache has expired?

    The browser reads the Expires and Cache-Control response headers stored with the cached result and compares them with the current time.

    Expires is an HTTP/1.0 field whose value is an absolute time.

    Expires: Tue, 18 Jan 2022 09:53:23 GMT

    One drawback of comparing absolute time is that it depends on the computer clock being set correctly.

    To solve this problem, HTTP/1.1 added the Cache-Control field, whose value is a relative time.

    Cache-Control: max-age=60  // unit: seconds
  • How do I determine whether a file has changed?

    First, you can compare the last modification time.

    // Response header of the cached result
    Last-Modified: Mon, 10 Jan 2022 09:06:14 GMT
    // Request header of the new request
    If-Modified-Since: Mon, 10 Jan 2022 09:06:14 GMT

    The browser takes the Last-Modified value from the cached result and sends it to the server via If-Modified-Since, where it is compared with the last modification time of the target file on the server.

    Second, the ETag can be compared.

    An ETag (entity tag) is an arbitrary label (a quoted string) attached to a document. It may contain a document serial number or version name, a checksum of the document contents, or other fingerprint information. When the publisher modifies a document, it changes the document's entity tag to indicate a new version.

    The ETag value from the cached response header is sent back via the If-None-Match request header and compared with the ETag of the target file on the server.

    // Response header of the cached result
    ETag: "61dbf706-142"
    // Request header of the new request
    If-None-Match: "61dbf706-142"

    As above, the newer fields address some drawbacks of the earlier approach:

    • Some documents may be rewritten periodically (for example, by a background process) while the data they contain stays the same; the content does not change, but the modification date does.
    • Some documents may have been modified, but only in ways too minor to justify caches worldwide reloading the data (such as changes to spelling or comments).
    • Some servers cannot accurately determine the last modification dates of their pages.
    • For servers that serve documents changing at sub-second intervals (for example, live monitors), one-second granularity on modification dates may not be enough.
  • What if both generations of fields are present at the same time?

    For browser-compatibility reasons, both sets of fields are typically used together. Neither takes precedence; the union is taken.

    When both conditions are met, the corresponding cache is hit.

Practice

Caching is a core capability of Web servers and browsers. Mainstream Web server frameworks such as Nginx and koa-static have the above caching policies built in; they work out of the box, with no extra programming or configuration required.

Take Nginx as an example: the configuration directive for the strong cache is expires, which accepts a number of seconds.

server {
    listen       8080;
    location / {
        root   /Users/zhp/demo/cache-koa/static;
        index  index.html;
        # try_files causes the cache configuration to not take effect
        # try_files $uri $uri/ /index.html;
        expires 60;
    }
}

In real work, plain configuration is all it takes, but that does not reveal much about the mechanics. To reinforce the concepts, I used koa to crudely simulate them, as a verification of the points above.

Below is a minimal static resource service with no caching.

const Koa = require("koa");
const fs = require("fs");
const app = new Koa();

app.use(async (ctx) => {
  // 1. Read the requested file based on the access path
  const content = fs.readFileSync(`./static${ctx.path}`, "utf-8");
  // 2. Set the response
  ctx.body = content;
});

In this case, no matter how many times it is accessed, nothing is cached.

Now add the Expires and Cache-Control fields required for the strong cache to the response header:

app.use(async (ctx) => {
  // 1. Read the requested file based on the access path
  const content = fs.readFileSync(`./static${ctx.path}`, "utf-8");
  // 2. Set the cache headers
  ctx.response.set("Cache-Control", "max-age=60");
  ctx.response.set("Expires", new Date(Date.now() + 60 * 1000).toUTCString());
  // 3. Set the response
  ctx.body = content;
});

Check the Network panel: the two fields below appear in the response header, and requests made within 60 seconds are served from the cache, as expected.

Expires: Tue, 18 Jan 2022 10:05:09 GMT
Cache-Control: max-age=60
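The negotiated cache can be simulated the same way. Below is a minimal sketch, not part of the original demo, that swaps the handler above for one based on Last-Modified / If-Modified-Since; an ETag branch would follow the same pattern with If-None-Match.

app.use(async (ctx) => {
  // fs and app are the same as in the service above
  const filePath = `./static${ctx.path}`;
  const lastModified = fs.statSync(filePath).mtime.toUTCString();

  // Compare the freshness info sent by the browser with the file on disk
  if (ctx.get("If-Modified-Since") === lastModified) {
    ctx.status = 304; // negotiated cache hit: respond without a body
    return;
  }

  ctx.response.set("Last-Modified", lastModified);
  ctx.body = fs.readFileSync(filePath, "utf-8");
});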

Note

I picked up HTTP: The Definitive Guide intending to quote first-hand authoritative sources, but the read was honestly unsatisfying. Beginners are advised to start with Graphical HTTP, which is much friendlier.

References

  • HTTP: The Definitive Guide
  • HTTP caching mechanism
  • Nginx Chinese documentation

3. Static resource CDN

Concept

CDN stands for Content Delivery Network. A CDN is an intelligent virtual network built on top of the existing one: it relies on edge servers deployed across regions, plus central-platform modules for load balancing, content distribution, and scheduling, so that users fetch content from the nearest node. This reduces network congestion and improves response speed and hit ratio.

Its core benefits boil down to two points:

① Load balancing selects the best service node for the user's request;

② Content caching improves the user's response speed.

Practice

Average player: pick a CDN provider and follow its documentation; configure the domain name and the origin site so the CDN proxies your own static resource server.

Advanced player: build your own CDN servers, blah blah…

References

  • Ali Cloud CDN quick start

4. Enable gzip

Concept

Gzip is short for GNU zip and was first used for file compression on UNIX systems. Gzip encoding over HTTP is a technique for improving Web application performance; both the Web server and the client (browser) must support it. Gzip typically compresses content by a factor of 3 to 10, which greatly reduces the server's network bandwidth.

Practice

In practice there are two approaches: dynamic compression and static compression.

  • Dynamic compression. When a request arrives, the server compresses the data stream in real time and outputs it; only the original CSS/JS files are stored on the server. Nginx's httpGzip module supports this, with the main configuration below (a koa-based sketch follows this list):

    # Enable/disable the gzip module
    gzip             on;
    # Minimum response size eligible for compression; values above 1K are recommended, since compressing smaller responses may add overhead
    gzip_min_length  1024;
    # MIME types to compress (text/html is always compressed, whether listed or not)
    gzip_types       text/plain application/x-javascript text/css text/html application/xml;
  • Static compression. The server prepares compressed files ahead of time and responds to requests with the pre-compressed resources instead of compressing on the fly.

    Implementation with webpack + Nginx:

    ① Install and apply compression-webpack-plugin

       // ## install ##
       // (pin to 5.x to avoid "Cannot read property 'tapPromise' of undefined" on webpack 4)
       npm i --save-dev compression-webpack-plugin@5.0.1

       // ## webpack configuration ##
       // vue.config.js
       const CompressionPlugin = require("compression-webpack-plugin");

       module.exports = {
         configureWebpack: {
           plugins: [
             new CompressionPlugin()
           ]
         }
       }

    ② Run npm run build

    Once the build completes, there are extra .gz files in the dist directory.

    ③ Enable gzip_static in the Nginx configuration

    http {
        gzip_static on;
        server {
            listen       8082;
            location / {
                root /Users/zhp/demo/demo-externals/dist;
            }
        }
    }
  • Results

    Content-Encoding: gzip appears in the response header, indicating that the server configuration has taken effect.

    The Size column in Network shows transferred sizes smaller than the source files on the server, indicating that the browser supports gzip.
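Nginx aside, the same dynamic compression can be reproduced in the koa demos used elsewhere in this article with the koa-compress middleware. A minimal sketch, assuming the app object from the caching section:

const Koa = require("koa");
const compress = require("koa-compress");

const app = new Koa();

// Compress response bodies on the fly; skip anything under 1 KB,
// mirroring the gzip_min_length setting above
app.use(compress({ threshold: 1024 }));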

Note

gzip_static has a higher priority than gzip. When both are enabled, Nginx tries to match a .gz file first before falling back to dynamic compression.

References:

  • Do you really know gzip?
  • www.npmjs.com/package/com…

5. Use a newer version of HTTP

Concept

From 1.0 to 1.1 to today's 2.0, the HTTP protocol has grown faster and stronger with each iteration.

The changes are numerous and deeply technical; to explain the advantages of the newer versions, only a few are covered here: persistent connections and pipelining in HTTP/1.1, and multiplexing and header compression in HTTP/2.

  • Persistent connections

    In the original version of HTTP, the TCP connection was torn down after every exchange. To reduce the overhead of repeatedly establishing and closing TCP connections, HTTP/1.1 (and some HTTP/1.0 implementations) introduced persistent connections (also known as HTTP keep-alive or HTTP connection reuse). With a persistent connection, the TCP connection stays open as long as neither end explicitly closes it.

  • Pipelining

    Previously, a request had to receive its response before the next request could be sent. Pipelining lets the client send multiple requests in parallel without waiting for each response in turn.

  • Multiplexing

    In HTTP/1.1, even when multiple requests are pipelined, the server must respond in the order the requests were sent. Later requests are blocked (queued) until all earlier responses have been received; this is called head-of-line blocking.

    HTTP/2 introduces binary data frames that carry sequence information. After receiving the data, the browser reassembles it by sequence, so the server can transmit data in parallel.

    To solve the ordering problem, both sides keep sending messages over a single TCP connection. Each message is treated as a frame, and each frame carries a stream identifier saying which "stream" it belongs to. When the other side receives the frames, it uses the stream identifier to reassemble all the frames of each stream into a complete block of data.

    Each HTTP/1.x-style request is treated as a stream: requests become multiple streams, request and response data are cut into frames, and frames from different streams are sent interleaved. This is HTTP/2 multiplexing.

    The end result is that no matter how many files or requests are fetched from the same domain, only one connection is needed.

  • Header compression

    HTTP/1.x has no header compression; gzip only compresses the body. HTTP/2 adds header compression. Request headers in general, and cookies in particular, take up a lot of space on repeated requests, so header compression makes the whole HTTP message much smaller and therefore faster.

Practice

Mainstream Web servers such as Nginx and Tomcat support HTTP/2. For configuration details, refer to the official documentation (more than I can cover here 😂).
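As a minimal illustration, not taken from those docs, Node.js also ships a built-in http2 module. Browsers only speak HTTP/2 over TLS, so the certificate paths below are placeholders:

const http2 = require("http2");
const fs = require("fs");

// Browsers require TLS for HTTP/2, so a key/certificate pair is needed
const server = http2.createSecureServer({
  key: fs.readFileSync("./localhost-key.pem"),   // placeholder path
  cert: fs.readFileSync("./localhost-cert.pem"), // placeholder path
});

server.on("stream", (stream, headers) => {
  // Every request arrives as a stream multiplexed over the single connection
  stream.respond({ ":status": 200, "content-type": "text/html" });
  stream.end("<h1>served over HTTP/2</h1>");
});

server.listen(8443);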

Note

A quick tip on checking the protocol version: right-click the table header in the Network panel and select Protocol in the popup.

References

  • HTTP: The Definitive Guide
  • Web Performance optimization with HTTP/2
  • Analysis of HTTP/2 multiplexing

II. Code level

1. Optimize DOM operations

Concept

As is well known, browser rendering is expensive. Batching DOM operations avoids frequent reflows and repaints and improves rendering efficiency.

One of the best-known practices for optimizing DOM manipulation is the virtual DOM.

A virtual DOM is a plain JS object that describes the DOM structure; because it is not the real DOM, it is called virtual.

Its value lies in:

① Looking up properties on a JS object is far cheaper than querying the DOM tree.

② When data changes trigger frequent DOM updates, all changes are first applied to this JS object, and the real DOM operations are merged and executed in a single macro task (one event-loop turn).

③ By diffing the old and new virtual DOM trees (the diff algorithm), the scope of real DOM changes is kept to a minimum.
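For intuition, a virtual DOM node is just an ordinary object. A minimal, hypothetical sketch (field names vary between frameworks):

// Describes <ul id="list"><li>item1</li><li>item2</li></ul> without touching the real DOM
const vnode = {
  tag: "ul",
  props: { id: "list" },
  children: [
    { tag: "li", props: {}, children: ["item1"] },
    { tag: "li", props: {}, children: ["item2"] },
  ],
};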

Practice

Both Vue and React build on the virtual DOM; combined with their data-driven model, there is rarely a need to worry about concrete DOM operations once a framework is in use.

Note

This section should arguably be a pile of native DOM-manipulation examples, but with Vue and React dominating, that would feel a bit dated.

References

  • V3.cn.vuejs.org/guide/optim…

2. Event delegation

Concept

In simple terms, instead of binding an event handler directly to the target element, we bind it to a parent/ancestor element.

This has two advantages: ① the page listens for fewer events; ② when child nodes are added, no new handlers need to be bound.

Practice

Take the requirement "the li background turns gray when the mouse hovers over it" as an example.

  • Normal event binding:
<ul>
  <li>item1</li>
  <li>item2</li>
  <li>item3</li>
  <li>item4</li>
  <li>item5</li>
  <li>item6</li>
</ul>
<script>
	$("li").on("mouseover".function () {$(this)
      .css("background-color"."#ddd")
      .siblings()
      .css("background-color"."white");
  });
</script> 
Copy the code
  • Using event delegation:
  $("ul").on("mouseover".function (e) {
    $(e.target)
      .css("background-color"."#ddd")
      .siblings()
      .css("background-color"."white");
  });
Copy the code
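The same idea without jQuery, as a rough sketch using native DOM APIs:

const ul = document.querySelector("ul");

// One listener on the parent handles hover for every current and future <li>
ul.addEventListener("mouseover", (e) => {
  const li = e.target.closest("li");
  if (!li || !ul.contains(li)) return;
  [...ul.children].forEach((item) => (item.style.backgroundColor = "white"));
  li.style.backgroundColor = "#ddd";
});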

3. Debounce and throttle

Concept

Both debounce and throttle exist to optimize performance when an event fires many times within a short period. They differ in effect and in the scenarios they suit.

  • Debounce. When triggers come in repeatedly within a time window, only the last one executes. The core idea is delayed execution: every new trigger resets the timer.

        function debounce(fn) {
          // 1. Keep the timer handle in a closure
          let timeout = null;
          return function () {
            // 2. Every time the user clicks/types, clear the previous timer
            clearTimeout(timeout);
            // 3. Start a new setTimeout.
            //    This guarantees fn does not run if another trigger arrives within the interval
            timeout = setTimeout(() => {
              fn.apply(this, arguments);
            }, 1000);
          };
        }

    Classic scenario: live search in a search box; the query runs only after the user stops typing.

  • Throttle. The handler runs at most once per time window. The core idea is a lock: the handler only runs again once the required interval has elapsed.

        function throttle(fn) {
          // 1. Keep a flag in a closure
          let canRun = true;
          return function (...args) {
            // 2. At the start, check the flag; if it is false, bail out
            if (!canRun) {
              return;
            }
            // 3. Set canRun to false so the function cannot run again until this run finishes
            canRun = false;
            // 4. Execute after the interval
            setTimeout(() => {
              fn.apply(this, args); // for leading-edge execution, move this line outside the timer
              // 5. After the work is done (e.g. the API call returns), reset the flag to true
              canRun = true;
            }, 1000);
          };
        }

    Classic scenarios: high-frequency triggers such as scroll events; preventing duplicate button clicks.

Practice

In practice we can use the functions above directly, or reach for a third-party library such as Lodash.

  1. Installation

    npm i lodash.debounce
  2. Use

    // xxx.vue
    <template>
      <div>
        <input type="text" @input="onInput">
      </div>
    </template>

    <script>
    import debounce from "lodash.debounce"
    export default {
      methods: {
        onInput: debounce((event) => {
          console.log(event)
        }, 1000)
      }
    }
    </script>

4. Lazy loading of images

Concept

Lazy loading optimizes when images are loaded. On image-heavy pages (e-commerce pages, group-buying sites, game home pages, and so on), loading every image resource the moment the user opens the page is likely to cause blank screens and jank.

With lazy loading, the browser loads only the images inside the visible area and leaves the many images outside the viewport unloaded, fetching them later as the page scrolls down. Besides avoiding wasted resources, the page loads more smoothly.

Practice

The idea is simply to leave the img src empty and set it only when the image enters the viewport, triggering the load at that point. But there is no need to implement this by hand; that would be reinventing the wheel. Plenty of mature plugins/components exist, such as Vant's Lazyload (a native sketch using IntersectionObserver follows the snippet below).

// main.js
import { Lazyload } from 'vant';
app.use(Lazyload);

// xxx.vue
<img v-for="img in imageList" v-lazy="img" />
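If you do want to see how the wheel is made, here is a rough native sketch, not from the snippet above, using IntersectionObserver and assuming each image keeps its real URL in a data-src attribute:

// Assumes markup like <img data-src="real.jpg"> with src left empty
const observer = new IntersectionObserver((entries) => {
  entries.forEach((entry) => {
    if (!entry.isIntersecting) return;
    const img = entry.target;
    img.src = img.dataset.src; // start loading only once the image is visible
    observer.unobserve(img);   // each image only needs to be loaded once
  });
});

document.querySelectorAll("img[data-src]").forEach((img) => observer.observe(img));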

Note

Images are just the carrier; lazy loading is really loading on demand. Paginated queries, lazy route loading, and asynchronous module loading are all common optimizations of the same kind.

References

  • How to implement image lazy loading

III. Build level

There are plenty of build tools out there, but here we focus on webpack.

Of three thousand rivers, take but one ladle.

1. Lazy route loading

Concept

When an application is bundled, the JavaScript bundle can become very large and slow down page load. It is more efficient to split the components of different routes into separate chunks and load each component only when its route is visited.

Practice

Here is the official route lazy-loading example from Vue Router:

// Replace
// import UserDetails from './views/UserDetails'
// with
const UserDetails = () => import('./views/UserDetails')

const router = createRouter({
  // ...
  routes: [{ path: '/users/:id', component: UserDetails }],
})

There are two core pieces to the implementation:

① ES6 dynamic import, import(), loads the module asynchronously;

② the build tool recognizes it and packages the module into a separate chunk at build time.

We can also name the chunk, and group several route sources into the same chunk, via the inline comment /* webpackChunkName: "about" */ (webpack syntax).

// router.js
{
  path: '/about',
  name: 'About',
  // route level code-splitting
  // this generates a separate chunk (about.[hash].js) for this route
  // which is lazy-loaded when the route is visited.
  component: () => import(/* webpackChunkName: "about" */ '../views/About.vue')
}

References

  • Router.vuejs.org/zh/guide/ad…

2. Externals: excluding dependencies

Concept

Webpack's externals option lets us exclude specified dependencies from the output bundle; excluded dependencies take no part in the build.

The typical scenario is loading large third-party dependencies from a CDN.

Practice

Take loading Vue from a CDN in a vue-cli project as an example.

  1. First, add a script reference in public/index.html

    // public/index.html
    <!DOCTYPE html>
    <html lang="">
      <head>
        ...
        <script src="https://lib.baomitu.com/vue/2.6.11/vue.min.js"></script>
      </head>
      <body>
        ...
      </body>
    </html>
  2. Use the webpack externals option to exclude the vue dependency

    // vue.config.js
    const BundleAnalyzerPlugin = require('webpack-bundle-analyzer').BundleAnalyzerPlugin;

    module.exports = {
      configureWebpack: {
        plugins: [
          new BundleAnalyzerPlugin() // outputs a bundle analysis report when running npm run build --report
        ],
        externals: {
          vue: 'Vue',
        }
      }
    }
  3. Verify the result with BundleAnalyzerPlugin (a bundle analysis plugin)

    Compare the analysis reports produced by npm run build --report before and after the change.

    The comparison shows the overall bundle size drops by more than 200 KB, and the vue dependency disappears from chunk-vendors, as expected.

References

  • Webpack Chinese document
  • www.npmjs.com/package/web…

3. Tree shaking and importing on demand

Concept

Tree shaking is a term commonly used to describe the removal of dead code from a JavaScript context.

The concept is old, but a practical implementation only became possible with ES6: because ES6 modules are resolved at compile time, static analysis is feasible.

Practice

Webpack 4 officially extended this capability. In a project created with vue-cli, no extra configuration is needed for it to work.

// assets/util
const funcA = () => {
  console.log("this is funcA")
}
const funcB = () => {
  console.log("this is funcB")
}
export {
  funcA,
  funcB
}

// app.vue
import { funcA } from "./assets/util";
export default {
  created() {
    funcA();
  }
};

Run npm run build and open dist/app.xxx.js: only funcA is present, with no funcB, as expected.

But when the import comes from some third-party packages, it does not actually take effect. Take lodash:

import debounce from 'lodash/debounce'; // 3.35 KB
import { debounce } from 'lodash'; // 72.48 KB

For tree shaking to take effect, a number of conditions must be met (a package.json sketch follows this list):

  • Use ES2015 module syntax (i.e. import and export).
  • Make sure no compiler transforms your ES2015 module syntax into CommonJS (incidentally, this is the default behavior of the now-common @babel/preset-env; see its documentation for details).
  • Add a "sideEffects" property to the project's package.json file.
  • Use the "production" mode configuration option to enable more optimizations, including minification and tree shaking.
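For the "sideEffects" item above, a minimal package.json sketch: false declares the whole package side-effect free, while an array of globs can instead list the files that do have side effects (global CSS, polyfills, and so on).

{
  "sideEffects": false
}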

Note that a package not supporting tree shaking does not mean it cannot be imported on demand. Before tree shaking, plugins such as babel-plugin-import were everywhere:

import { Button } from 'antd';
ReactDOM.render(<Button>xxxx</Button>);

↓ ↓ ↓ ↓ ↓

var _button = require('antd/lib/button');
ReactDOM.render(<_button>xxxx</_button>);

References

  • Webpack.docschina.org/guides/tree…
  • Github.com/umijs/babel…

IV. Advanced

Listed here are a few niche optimizations with a higher barrier to entry and a bigger payoff.

1. SSR

Concept

SSR stands for Server-Side Rendering; its counterpart is Client-Side Rendering.

  • Server-side rendering: the page is interpolated and assembled with data on the server, which returns the complete, data-filled page directly.
  • Client-side rendering: the client requests the static page resources and the interface data separately, then manipulates the DOM to fill the page with data.

In fact, at the beginning of the Web there was only server-side rendering. .NET and JSP were in their heyday, and there was only one kind of programmer. Then came Ajax, which let pages fetch data without refreshing; the door to client-side rendering swung open and things took off from there. Front-end/back-end separation and the popularity of single-page applications pushed client-side rendering further still.

Today, SSR mainly survives in scenarios that demand a fast first screen, mostly static content, or SEO.

Practice

Traditional server-side rendering can be done with a back-end template system or a string template engine. Here we pick the harder, more involved case: SSR for a single-page application.

Below is the official Vue SSR example, with Express swapped for koa, which I am more familiar with.

// ssr.js
import Koa from "koa";
import { createSSRApp } from "vue";
import { renderToString } from "vue/server-renderer";

const app = new Koa();

app.use(async (ctx) => {
  const vueApp = createSSRApp({
    data: () => ({ count: 1 }),
    template: `<button @click="count++">{{ count }}</button>`,
  });

  const html = await renderToString(vueApp);

  const result = `
  <!DOCTYPE html>
  <html>
    <head>
      <title>Vue SSR Example</title>
    </head>
    <body>
      <div id="app">${html}</div>
    </body>
  </html>
  `;
  ctx.body = result;
});

app.listen(3000, () => {
  console.log("starting at port 3000");
});

The logic is surprisingly simple: ① create a single-page app; ② render the Vue instance to a string; ③ concatenate the HTML and respond.

Run node ssr.js, then visit localhost:3000, and the "1" button appears on the page as expected.

But that is not the end of it: clicking the button does not change the number. Vue still needs one more step, hydration, also called client-side activation.

Client-side activation is the process by which Vue, in the browser, takes over the static HTML sent by the server and turns it into a dynamic DOM managed by Vue.

In plain words, the client still has to instantiate the Vue app so that the non-static code, such as DOM event handlers, can run.

Step 1: add client.js. Its content is the same as "create a single-page app" above.

// client.js
import { createSSRApp } from "vue";

const vueApp = createSSRApp({
  data: () => ({ count: 1 }),
  template: `<button @click="count++">{{ count }}</button>`,
});

vueApp.mount("#app");

Step 2: reference client.js in the HTML head.

<script type="module" src="/client.js"></script>

Since a JS file is now referenced, we also need a static resource service so the page can load client.js.

The complete code is as follows:

import Koa from "koa";
import koaStatic from "koa-static";
import { createSSRApp } from "vue";
import { renderToString } from "vue/server-renderer";

const app = new Koa();

// Static resource middleware to ensure that it can be loaded into client.js
app.use(koaStatic("."));

app.use(async (ctx) => {
  const vueApp = createSSRApp({
    data: () = > ({ count: 1 }),
    template: `<button @click="count++">{{ count }}</button>`});const html = await renderToString(vueApp);
  const result = ` <! DOCTYPE html> <html> <head> <title>Vue SSR Example</title> <script type="importmap"> { "imports": { "vue": "https://unpkg.com/vue@3/dist/vue.esm-browser.js" } } </script> <script type="module" src="/client.js"></script> </head>  <body> <div id="app">${html}</div>
    </body>
  </html>
  `;
  ctx.body = result;
});

app.listen(3000.() = > {
  console.log("starting at port 3000");
});
Copy the code

Restart the service, refresh the page, and the counter now works.

The example is only an example: the tip of the iceberg, the bare basics. A real application still has to handle data prefetching, cross-request state pollution, build integration, and more. In short, building a server-rendered app from scratch is fairly complex. Fortunately, the community has mature SSR solutions, such as Nuxt.js, which the Vue team recommends.

Note

Logging an error: if you see SyntaxError: Cannot use import statement outside a module, upgrade Node and set "type": "module" in package.json.

You need to tell Node.js to treat the JavaScript code as an ECMAScript module, via the .mjs file extension, the package.json "type" field, or the --input-type flag.

References

  • Vuejs.org/guide/scali…
  • Understand Vue SSR principle and build project framework
  • Vue SSR Guide (Vue.js Server-Side Rendering Guide)

2. Web Workers

Concept

A Web Worker is JavaScript that runs in the background without affecting page performance.

The essence is spawning a child thread.

The Worker interface spawns real OS-level threads that can perform tasks without blocking the UI thread.

It is typically used for CPU-intensive work such as heavy computation.

Practice

I have little first-hand material here; in my experience I have never hit a scenario that required a Worker. Two things come to mind: ① some plugins, such as pdf.js, appear to use it, since their build output includes an xxx.worker.js file; ② Node has its own multi-process API (child_process), which is widely used in build tooling.
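For completeness, a minimal sketch of offloading a heavy computation to a Worker; the file name and the summing loop are made up for illustration:

// main.js: spawn the worker and hand off the input without blocking the UI thread
const worker = new Worker("heavy.worker.js");
worker.postMessage({ n: 1e9 });
worker.onmessage = (e) => console.log("result:", e.data);

// heavy.worker.js: runs on a separate thread
self.onmessage = (e) => {
  let sum = 0;
  for (let i = 0; i < e.data.n; i++) sum += i; // CPU-intensive loop
  self.postMessage(sum);
};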

Note

Perhaps the more important takeaway for me is the reminder to think outside the box and not get stuck in a fixed mindset.

JS is single-threaded, but the browser is not. We are JSers, but we are Coders first, and we should not (and cannot) confine our vision and understanding to a single thread or a single territory.

References

  • Developer.mozilla.org/zh-CN/docs/…

3. Build a performance optimization system

Concept

A self-check list only reminds us which points deserve attention during development. A concrete optimization task will come with requirements for measurable, verifiable improvements. At that point we need to set the scattered techniques aside, step up a level, and build a goal-oriented, verifiable, systematic performance-optimization system.

Such a system generally includes: metric selection, collection and reporting, status analysis, optimization plan, testing plan, performance evaluation, and alerting.

Practice

Someday, in this lifetime…

Afterword

This article took more than a month of on-and-off writing, a difficult birth. It drained my passion for writing and nearly broke my good habit of summarizing regularly.

I analyzed many reasons, from thinking patterns to word choice, from motivation to personality. Here is a reflective summary, as a warning to myself.

  1. Wrong goals, wasted effort: devoted to explaining why 1 + 1 = 2, struggling to put into plain words what everyone already knows;
  2. Drilled into minutiae: often agonizing over a single word or sentence, such as whether to write "apply" or "use", whether to use a comma or a period;
  3. Lack of an accumulated stock of words and sentences: when reading I skim ten lines at a glance and take only the gist, then rack my brain when it is time to express myself;
  4. Disordered logical thinking: always jumping between scattered points and grouping them afterwards, instead of thinking hierarchically, pyramid-style, and expressing things in order.

[wry smile] I really have no talent for writing, but who told me to like it. I agree with Ruan Yifeng's point:

Many people suggest that when looking for a direction in life, you should listen to your heart and find what you are truly passionate about. I now think the more realistic advice is to find the pain you are willing to endure. The direction in which you are willing to endure hardship year after year, willingly and with the greatest patience, "though I die nine deaths, I shall not regret it": that is the direction you should choose.

Well, I am glad that programming and writing became my direction.