Recently, I have organized these high-frequency front-end interview questions and am sharing them with you to study together. If you spot any mistakes, please correct me!

Note: updated on 2021.5.21 to fix some errors in this article and add a mind map. Based on my own interview experience and the interview write-ups on platforms such as Niuke.com, the questions are roughly graded by how frequently they come up, so you can review in a targeted way.

This is part of a series of articles:

[1] “2021” high-frequency front-end interview questions summary: HTML

[2] “2021” high-frequency front-end interview questions summary: CSS

[3] “2021” high-frequency front-end interview questions summary

[4] “2021” high-frequency front-end interview questions summary

[5] “2021” high-frequency front-end interview questions summary

[6] “2021” high-frequency front-end interview questions summary

[7] “2021” high-frequency front-end interview questions summary

[8] “2021” high-frequency front-end interview questions summary

[9] “2021” high-frequency front-end interview questions summary: computer networks

[10] “2021” high-frequency front-end interview questions summary: browser principles

[11] “2021” high-frequency front-end interview questions summary: performance optimization

[12] “2021” high-frequency front-end interview questions summary: handwritten code

[13] “2021” high-frequency front-end interview questions summary: code output results

I. CDN

1. Concept of CDN

A Content Delivery Network (CDN) is a network of servers interconnected through the Internet that delivers music, pictures, videos, applications, and other files to users faster and more reliably by serving each user from the server closest to them. Its goal is high-performance, scalable, and low-cost content delivery.

A typical CDN system consists of the following three parts:

  • Distribution service system: its most basic unit of work is the cache device. An edge cache responds directly to end users’ access requests and quickly serves locally cached content. The cache is also responsible for synchronizing content with the origin site: it fetches updated content and locally missing content from the origin and stores it locally. The number, scale, and total service capacity of the cache devices are the most basic indicators of a CDN’s service capability.
  • Load balancing system: responsible for scheduling the access of all users who initiate service requests and determining the final actual address served to each user. It is a two-level scheduling system consisting of global server load balancing (GSLB) and load balancing within a node (SLB). The GSLB uses the proximity principle to pick the optimal service node for each user, thereby determining the physical location of the cache that will serve them; the SLB balances load among the devices within a node.
  • Operation management system: divided into operation management and network management subsystems, it handles the collection, sorting, and delivery of data needed for interaction between the business and external systems, covering customer management, product management, billing management, statistical analysis, and other functions.

2. The role of CDN

CDNs are generally used to host web resources (text, images, scripts, etc.), downloadable resources (media files, software, documents, etc.), and applications (portal sites, etc.), speeding up access to these resources.

(1) In terms of performance, introducing a CDN helps because:

  • Users receive content from the nearest data center, with lower latency and faster content loading
  • Some resource requests are allocated to the CDN, reducing the load on the server

(2) In terms of security, a CDN helps defend against DDoS, MITM, and other network attacks:

  • Against DDoS: monitor and analyze abnormal traffic, and limit the request frequency
  • Against MITM: communicate over HTTPS across the full link between the origin server, CDN nodes, and Internet Service Providers (ISPs)

In addition, as a basic cloud service, a CDN also offers resource hosting and on-demand scaling (to cope with traffic peaks).

3. CDN principle

CDN and DNS are closely related, so let’s first look at the DNS domain name resolution process. When you enter www.test.com in the browser, resolution proceeds as follows:

(1) Check the browser cache
(2) Check the operating system cache (e.g. the hosts file)
(3) Check the router cache
(4) If the previous steps find nothing, query the ISP’s local DNS (LDNS) server
(5) If the LDNS server finds nothing, it requests resolution from a root name server, as follows:

  • The root server returns the address of a top-level domain (TLD) server such as .com, .cn, or .org; in this example, the address of the .com server
  • The request is then sent to that TLD server, which returns the address of the second-level domain (SLD) server; in this example, the address of the test server
  • The request is then sent to the SLD server, which returns the destination IP for the queried domain name; in this example, the IP for www.test.com
  • The local DNS server caches the result and returns it to the user, where it is also cached by the system

The working principle of a CDN is as follows:

(1) The process of requesting a resource without a CDN:

  1. The browser uses the DNS to resolve the domain name and obtains the IP address corresponding to the domain name in sequence
  2. Based on the obtained IP address, the browser sends a data request to the service host of the domain name
  3. The server returns response data to the browser

(2) The process of using CDN to cache resources:

  1. After the local DNS system resolves the requested URL, it finds that the domain corresponds to a DNS server dedicated to the CDN (via a CNAME record), so it delegates resolution to the DNS server that the CNAME points to
  2. The CDN’s DNS server returns the IP address of the CDN’s global load balancer to the user
  3. The user sends a data request to the CDN’s global load balancer
  4. The global load balancer selects a regional load balancer for the user based on the user’s IP address and the requested URL, and tells the user to send the request to that device
  5. The regional load balancer selects an appropriate cache server to provide the service and returns its IP address to the global load balancer
  6. The global load balancer returns the cache server’s IP address to the user
  7. The user sends the request to the cache server, which responds and delivers the requested content to the user’s terminal

If the cache server does not have the content the user wants, it requests the content from its next-level cache server, and so on, until the required resource is obtained. If it is still not found, the request finally falls back to the origin server.

CNAME (canonical name, i.e. alias) record: during resolution, a domain name may resolve to a CNAME instead of an IP address; resolution then continues with the CNAME to find the corresponding IP address.

4. Application scenarios of CDN

  • **Using third-party CDN services:** if you want to open-source a project, you can distribute it via a third-party CDN service
  • **Caching static resources on a CDN:** put your own static resources, such as JS, CSS, and images, on a CDN. You can even place the whole project on the CDN for one-click deployment.
  • **Live streaming:** live streaming is essentially streaming media, and CDNs also support streaming media, so live broadcasts can use a CDN to improve access speed. CDNs handle streaming media differently from ordinary static files: for an ordinary file, if an edge node misses, it looks one level up; but streaming media involves very large amounts of data, so falling back to the origin this way would cause performance problems. For streaming media, content is therefore usually actively pushed out to the edge.

II. Lazy loading

1. Concept of lazy loading

Lazy loading, also known as delayed loading or on-demand loading, means deferring the loading of image data on long pages; it is a good way to optimize web page performance. On a long page or in an application with many images, loading all of them at once wastes performance, because the user only sees the images inside the current viewport.

You can solve this with lazy loading of images: images outside the visible area are not loaded up front, but only as the page scrolls to them. This makes the page load faster and reduces server load. Lazy loading suits scenarios with lots of images and long lists.

2. Advantages of lazy loading

  • Reduce the load of unwanted resources: Using lazy loading significantly reduces the strain and traffic on the server, and also reduces the load on the browser.
  • Improve user experience: If you load a lot of images at the same time, you may need to wait a long time, which affects the user experience. Using lazy loading can greatly improve the user experience.
  • Prevent too many images from blocking other resource files: loading a large number of images at once would delay other resources and affect the normal use of the website

3. Lazy loading principle

An image is loaded when its src is set: as soon as src is assigned, the browser requests the image resource. Based on this, we use the HTML5 data-xxx attribute to store the image path; when the image needs to be loaded, we assign the path in data-xxx to src, achieving on-demand loading of the image, i.e. lazy loading.

Note: the xxx in data-xxx can be anything; here we use data-src.

The key to lazy loading is determining which images the user needs. In the browser, the resources in the visible area are the ones the user needs, so when an image enters the visible area, we fetch its real address and assign it to the image.

Implementing lazy loading using native JavaScript:

Knowledge:

(1) `window.innerHeight` is the height of the browser’s visible area

(2) `document.body.scrollTop || document.documentElement.scrollTop` is the distance the browser has scrolled

(3) `img.offsetTop` is the distance from the top of the element to the top of the document (including the scrolled distance)

(4) condition for loading an image: `img.offsetTop < window.innerHeight + document.body.scrollTop`

Code implementation:

```html
<div class="container">
  <img src="loading.gif" data-src="pic.png">
  <img src="loading.gif" data-src="pic.png">
  <img src="loading.gif" data-src="pic.png">
  <img src="loading.gif" data-src="pic.png">
  <img src="loading.gif" data-src="pic.png">
  <img src="loading.gif" data-src="pic.png">
</div>
<script>
  var imgs = document.querySelectorAll('img');

  function lazyLoad() {
    var scrollTop = document.body.scrollTop || document.documentElement.scrollTop;
    var winHeight = window.innerHeight;
    for (var i = 0; i < imgs.length; i++) {
      if (imgs[i].offsetTop < scrollTop + winHeight) {
        imgs[i].src = imgs[i].getAttribute('data-src');
      }
    }
  }

  // Assign the function itself (not the result of calling it) as the handler
  window.onscroll = lazyLoad;
</script>
```
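As an alternative to the scroll handler above, modern browsers provide IntersectionObserver, which spares you from reading layout properties on every scroll event. Below is a minimal sketch using the same data-src markup; the swap logic is split into a plain helper so it works independently of the DOM:

```javascript
// Swap the real address stored in data-src into src.
// (Pure helper: it only needs getAttribute/removeAttribute/src.)
function swapSrc(img) {
  var real = img.getAttribute('data-src');
  if (real) {
    img.src = real;
    img.removeAttribute('data-src');
  }
  return img.src;
}

// Browser-only part: load each image once it enters the viewport.
if (typeof IntersectionObserver !== 'undefined' && typeof document !== 'undefined') {
  var io = new IntersectionObserver(function (entries) {
    entries.forEach(function (entry) {
      if (entry.isIntersecting) {
        swapSrc(entry.target);
        io.unobserve(entry.target); // each image only needs loading once
      }
    });
  });
  document.querySelectorAll('img[data-src]').forEach(function (img) {
    io.observe(img);
  });
}
```

Compared with the onscroll version, the observer fires only when visibility actually changes, so there is no need to throttle it.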

4. The difference between lazy loading and preloading

Both are ways to improve web page performance. The main difference is that one loads early and the other loads late or not at all. Lazy loading relieves pressure on the server, while preloading adds to it, trading extra requests up front for a smoother experience later.

  • Lazy loading (delayed loading) defers loading the images on a long page until the user scrolls to them. This improves first-screen loading, improves the user experience, and reduces server pressure. It works well on e-commerce sites with many images and long pages. The implementation principle: set the src attribute of the page’s images to an empty string and store each image’s real path in a custom attribute; when the page scrolls, check each image, and if it has entered the visible area, copy the real path from the custom attribute into the src attribute, thereby lazily loading the image.
  • Preloading loads required resources in advance so that they can later be fetched directly from the cache when needed. Preloading reduces user waiting time and improves the user experience. The most common way to preload images is to use the Image object in JS: create an Image object and set its src property, and the browser fetches the image.
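A minimal sketch of image preloading with the Image object (the URLs are placeholders; outside a browser the sketch falls back to a plain object so the logic stays illustrative):

```javascript
// Preload a list of image URLs; the browser starts fetching
// each image as soon as its src property is assigned.
function preload(urls) {
  return urls.map(function (url) {
    var img = typeof Image !== 'undefined' ? new Image() : {};
    img.src = url; // triggers the network request in a browser
    return img;
  });
}

// e.g. preload(['banner.png', 'logo.png']) during idle time;
// a later <img src="banner.png"> is then served from the cache.
```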

III. Reflow and repaint

1. Reflow and repaint: concepts and triggering conditions

(1) Reflow

When the size, structure, or certain properties of some or all elements in the render tree change, the browser re-renders part or all of the document; this process is called reflow.

The following operations can cause reflow:

  • The first rendering of the page
  • The browser window size changes
  • The content of the element changes
  • The size or position of an element changes
  • The font size of the element changes
  • Activate CSS pseudo-classes
  • Query certain properties or call certain methods
  • Add or remove visible DOM elements

Since browsers use a flow-based layout, triggering reflow causes the surrounding DOM elements to be rearranged, in one of two scopes:

  • Global scope: Rearranges the entire render tree, starting from the root node
  • Local scope: Rearranges parts of a rendered tree or a rendered object

(2) Repaint

When an element’s style changes without affecting its position in the document flow, the browser repaints the element; this process is called repaint.

The following operations can cause repaint:

  • Color, background related properties: background-color, background-image, etc
  • Outline related properties: outline-color, outline-width, text-decoration
  • Border-radius, visibility, and box-shadow

Note: triggering reflow always triggers repaint, but repaint does not necessarily trigger reflow.

2. How to avoid reflow and repaint?

Measures to reduce reflow and repaint:

  • When manipulating the DOM, operate on DOM nodes as low in the tree as possible
  • Don’t use `table` layout; a small change may cause the whole `table` to be re-laid out
  • Avoid using CSS expressions
  • Don’t manipulate element styles frequently. For static pages, change the class name rather than the style.
  • Use absolute or fixed positioning to take elements out of the document flow, so that changes to them do not affect other elements
  • Instead of manipulating the DOM frequently, create a document fragment (`documentFragment`), apply all the DOM operations to it, and finally add it to the document
  • First set the element to `display: none`, then show it again after the operations are done. DOM operations on an element with `display: none` do not cause reflow or repaint.
  • Group multiple DOM reads (or writes) together instead of interleaving reads and writes. This takes advantage of the browser’s render queue mechanism.

The browser optimizes reflow and repaint on its own with a render queue:

The browser puts all reflow and repaint operations into a queue, and when the queue reaches a certain number of operations or a certain time interval, it flushes the queue in one batch. This turns multiple reflows and repaints into a single one.

As above, grouping multiple reads (or writes) together lets all of them queue up, so that instead of triggering multiple reflows, only one is triggered.
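As a sketch of the difference, here `boxes` stands for any collection of elements; only the grouped version lets the render queue batch everything into one reflow:

```javascript
// Interleaved version (layout thrashing): each offsetWidth read
// forces the browser to flush the style write from the previous loop.
function resizeInterleaved(boxes) {
  boxes.forEach(function (box) {
    box.style.width = box.offsetWidth + 10 + 'px'; // read + write mixed
  });
}

// Grouped version: all reads first, then all writes.
function resizeGrouped(boxes) {
  var widths = boxes.map(function (box) { return box.offsetWidth; }); // reads
  boxes.forEach(function (box, i) {                                   // writes
    box.style.width = widths[i] + 10 + 'px';
  });
}
```

Both produce the same final styles; the grouped version just triggers at most one reflow instead of one per element.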

3. How to optimize animation?

As for optimizing animations: animations generally need to manipulate the DOM frequently, which causes page performance problems. We can set the animated element’s position property to absolute or fixed, taking it out of the document flow, so that its reflow does not affect the rest of the page.

4. What is a documentFragment? What’s the difference between using it and directly manipulating the DOM?

DocumentFragment (MDN)

The DocumentFragment interface represents a minimal document object that has no parent. It is used as a lightweight version of Document: it stores a document structure made up of nodes, just like a standard document. The biggest difference is that a DocumentFragment is not part of the real DOM tree, so changes to it do not trigger re-rendering of the DOM tree and cause no performance problems.

When we insert a DocumentFragment node into the document tree, it is not the fragment itself that is inserted but all of its descendants. When performing frequent DOM operations, we can first insert the DOM elements into a DocumentFragment and then insert all of its children into the document at once. Compared with manipulating the DOM directly, this triggers only a single re-render of the page, which greatly improves page performance.
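A minimal sketch of the pattern (the `doc` parameter exists only so the helper can be exercised outside a browser; in a page you would simply call it without it and the global document is used):

```javascript
// Batch list-item insertions through a DocumentFragment.
function appendItems(list, items, doc) {
  doc = doc || document;
  var frag = doc.createDocumentFragment();
  items.forEach(function (text) {
    var li = doc.createElement('li');
    li.textContent = text;
    frag.appendChild(li); // mutations on the fragment cost no re-render
  });
  list.appendChild(frag); // one insertion into the real DOM -> one re-render
  return list;
}
```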

IV. Throttling and debouncing

1. Understanding throttling and debouncing

  • Function debouncing means the callback executes only after n seconds have passed since the event last fired; if the event fires again within those n seconds, the timer restarts. This suits click-triggered request events, avoiding sending multiple requests to the back end when the user clicks several times.
  • Function throttling means that within a specified unit of time, the callback can fire only once; if the same event fires multiple times within that unit of time, only one invocation takes effect. Throttling can be used on a scroll event listener to reduce how often the handler is called.

Application scenarios of the anti-shaking function:

  • Button submission: prevent multiple submissions; only the last click in a burst executes
  • Server-side validation: form validation that requires the server’s cooperation should respond only to the last of a sequence of input events, e.g. a search box’s suggestion feature

Application scenarios of the throttling function:

  • Drag-and-drop: fire only once per fixed interval, avoiding handling ultra-high-frequency position changes
  • Zooming: monitor the browser’s resize event
  • Animation: avoid triggering an animation many times in a short period and causing performance problems

2. Implementing throttling and debouncing

Debounce implementation:

```javascript
function debounce(fn, wait) {
  var timer = null;

  return function () {
    var context = this,
      args = [...arguments];

    // If there is a pending timer, cancel it and start over
    if (timer) {
      clearTimeout(timer);
      timer = null;
    }

    // Set a timer so fn runs only after the specified interval of silence
    timer = setTimeout(() => {
      fn.apply(context, args);
    }, wait);
  };
}
```

Function throttling implementation:

```javascript
// Timestamp version
function throttle(fn, delay) {
  var preTime = Date.now();

  return function () {
    var context = this,
      args = [...arguments],
      nowTime = Date.now();

    // If the interval since the last call exceeds the specified delay, execute fn
    if (nowTime - preTime >= delay) {
      preTime = Date.now();
      return fn.apply(context, args);
    }
  };
}

// Timer version
function throttle(fun, wait) {
  let timeout = null;
  return function () {
    let context = this;
    let args = [...arguments];
    if (!timeout) {
      timeout = setTimeout(() => {
        fun.apply(context, args);
        timeout = null;
      }, wait);
    }
  };
}
```

V. Image optimization

1. How to optimize the images in a project?

  1. Use no image where possible. Pages often carry many decorative images; this kind of decoration can usually be replaced with CSS.
  2. For mobile, the screen is narrow, so there is no need to load the original full-size image and waste bandwidth. Images are generally loaded through a CDN, so you can compute the width that fits the screen and request a correspondingly cropped image.
  3. Inline small images as base64
  4. Combine multiple icon files into one image (a sprite sheet)
  5. Choose the right image format:
    • Use WebP where the browser supports it. WebP has a better compression algorithm, produces smaller files at visually equivalent quality; its downside is limited compatibility
    • Use PNG for small images; in fact, for most icons and similar graphics, SVG is a good substitute
    • Use JPEG for photos
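To serve WebP only to browsers that can display it, you first need to detect support. A common detection trick is sketched below; it relies on the fact that canvas.toDataURL echoes back the requested MIME type only when the browser can encode that format:

```javascript
function supportsWebP() {
  // Outside a browser there is no canvas, so report no support
  if (typeof document === 'undefined') return false;
  var canvas = document.createElement('canvas');
  if (!canvas.getContext || !canvas.getContext('2d')) return false;
  // A supporting browser returns a data URL starting with the webp MIME type
  return canvas.toDataURL('image/webp').indexOf('data:image/webp') === 0;
}

// e.g. pick the asset extension once at startup:
// var ext = supportsWebP() ? '.webp' : '.png';
```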

2. Common image formats and usage scenarios

(1) BMP is a lossless bitmap that supports both indexed color and direct color. This format applies almost no compression to the image data, so BMP files are usually large.

(2) GIF is a lossless, indexed-color bitmap encoded with the LZW compression algorithm. Small file size is an advantage of GIF, along with support for animation and transparency. However, GIF supports only 8-bit indexed color, so it suits scenarios with low color requirements and small file sizes.

(3) JPEG is a lossy, direct-color bitmap. Its advantage is direct color: thanks to richer colors, JPEG is very suitable for storing photos. Compared with GIF, JPEG is not suitable for logos or wireframe-style images, because lossy compression blurs sharp edges, and direct color makes the file larger than a GIF.

(4) PNG-8 is a lossless, indexed-color bitmap. PNG is a relatively new format, and PNG-8 is a very good substitute for GIF: for the same image it produces a smaller file, and it also supports adjustable transparency, which GIF does not. Unless animation support is required, there is no reason to use GIF instead of PNG-8.

(5) PNG-24 is a lossless, direct-color bitmap. Its advantage is that it compresses the image data, making PNG-24 files much smaller than BMP files with the same result. Of course, PNG-24 files are still much larger than JPEG, GIF, or PNG-8.

(6) SVG is a lossless vector format. An SVG image consists of lines and curves plus the instructions used to draw them. When you zoom in on an SVG image you still see lines and curves, not pixels, so SVG does not distort when enlarged, making it ideal for logos, icons, and so on.

(7) WebP is an image format developed by Google. It is a bitmap that supports both lossy and lossless compression and uses direct color. As the name suggests, it was made for the Web: for the same image quality, WebP files are smaller. The web today is full of images, and shrinking each one reduces the amount of data transferred between browser and server, cutting access latency and improving the experience. Currently mainly Chrome and Opera support WebP, so its compatibility is limited.

  • With lossless compression, a WebP image of the same quality is 26% smaller than the PNG;
  • With lossy compression, a WebP image of the same visual precision is 25%–34% smaller than the JPEG;
  • WebP supports transparency: a losslessly compressed WebP image needs only 22% extra file size to support it.

VI. Webpack optimization

1. How to improve webpack packaging speed?

(1) Optimize the Loader

For loaders, the biggest factor affecting packaging efficiency is Babel, because Babel parses code into an AST and then converts the AST back into new code. The larger the project, the more code there is to transform, and the lower the efficiency. Of course, this can be optimized.

First, narrow the loader’s file search scope:

```javascript
module.exports = {
  module: {
    rules: [
      {
        // Use Babel only for js files
        test: /\.js$/,
        loader: 'babel-loader',
        // Look only in the src folder
        include: [resolve('src')],
        // Paths not to search
        exclude: /node_modules/
      }
    ]
  }
}
```

For Babel, we want it to apply only to our JS source code, and the code in node_modules is already compiled, so there is absolutely no need to process it again.

Of course, this is not enough. You can also cache the Babel compiled files and only compile the changed code files next time, which greatly speeds up the packaging time

```javascript
loader: 'babel-loader?cacheDirectory=true'
```

(2) HappyPack

Since Node is single-threaded, Webpack is also single-threaded during the packaging process, especially when Loader is executing, and there are many long compilation tasks, which can lead to waiting situations.

HappyPack converts the synchronized execution of the Loader to parallel, thus making full use of system resources to speed up packaging efficiency

```javascript
module: {
  loaders: [
    {
      test: /\.js$/,
      include: [resolve('src')],
      exclude: /node_modules/,
      // The id here corresponds to the HappyPack instance below
      loader: 'happypack/loader?id=happybabel'
    }
  ]
},
plugins: [
  new HappyPack({
    id: 'happybabel',
    loaders: ['babel-loader?cacheDirectory'],
    // Start four threads
    threads: 4
  })
]
```

(3) the DllPlugin

DllPlugin can pre-package specific libraries so they can simply be referenced afterwards. This greatly reduces how often those libraries need repackaging (only when they are upgraded), and it also performs the optimization of separating common code into standalone files. DllPlugin is used as follows:

```javascript
// In a separate file
// webpack.dll.conf.js
const path = require('path')
const webpack = require('webpack')

module.exports = {
  entry: {
    // Libraries to bundle together
    vendor: ['react']
  },
  output: {
    path: path.join(__dirname, 'dist'),
    filename: '[name].dll.js',
    library: '[name]-[hash]'
  },
  plugins: [
    new webpack.DllPlugin({
      // name must match output.library
      name: '[name]-[hash]',
      // This context needs to be consistent with the DllReferencePlugin
      context: __dirname,
      path: path.join(__dirname, 'dist', '[name]-manifest.json')
    })
  ]
}
```

You then run this configuration file to generate the dependency bundle, and import it into the project with the DllReferencePlugin:

```javascript
// webpack.conf.js
module.exports = {
  // ... other configuration omitted
  plugins: [
    new webpack.DllReferencePlugin({
      context: __dirname,
      // manifest is the json file generated by the previous packaging step
      manifest: require('./dist/vendor-manifest.json')
    })
  ]
}
```

(4) Code compression

In Webpack 3, UglifyJS is typically used to compress code, but it runs in a single thread. To increase efficiency, you can use webpack-parallel-uglify-plugin to run UglifyJS in parallel.

In Webpack 4, none of this is needed: setting mode to production enables compression by default. Code compression is a must for performance optimization; beyond JS, HTML and CSS can also be compressed, and while compressing JS you can additionally configure options such as removing console.log calls.
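As a sketch, stripping console.log during compression in Webpack 4 is usually done through the minimizer options (this assumes terser-webpack-plugin, the default minifier in recent Webpack 4 versions, is installed):

```javascript
// webpack.config.js (fragment)
const TerserPlugin = require('terser-webpack-plugin');

module.exports = {
  mode: 'production', // enables JS minification by default
  optimization: {
    minimizer: [
      new TerserPlugin({
        terserOptions: {
          compress: { drop_console: true } // remove console.* calls
        }
      })
    ]
  }
};
```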

(5) Others

A few small optimizations can also speed up packaging:

  • `resolve.extensions`: the list of file suffixes tried for extension-less imports; the default is `['.js', '.json']`. Keep this list as short as possible and put the most frequently used suffixes first
  • `resolve.alias`: lets Webpack find a path faster by mapping it to an alias
  • `module.noParse`: if you are sure a file has no other dependencies, this property tells Webpack not to scan it, which is useful for large libraries

2. How to reduce the Webpack bundle size?

(1) Load on demand

When developing a SPA, a project has many routed pages. Packing them all into a single JS file reduces the number of requests, but it also loads a lot of unneeded code and takes longer. To present the home page to the user faster, we want its file to be as small as possible; on-demand loading solves this by packaging each routed page as a separate file. Not only routes can be loaded on demand: the same applies to large libraries such as lodash.

The code for on-demand loading is not expanded here, since the implementation differs depending on the framework used. Their usage may differ, but the underlying mechanism is the same: when needed, the corresponding file is downloaded, a Promise is returned, and a callback executes when the Promise resolves.
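For example, with Vue Router (purely illustrative; the component paths and chunk names are placeholders), each route can become its own chunk via a dynamic import():

```javascript
// Each import() call tells Webpack to split the component into a
// separate chunk, which is fetched only when the route is visited.
const routes = [
  {
    path: '/',
    component: () => import(/* webpackChunkName: "home" */ './views/Home.vue')
  },
  {
    path: '/about',
    component: () => import(/* webpackChunkName: "about" */ './views/About.vue')
  }
];
```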

(2) Scope Hoisting

Scope Hoisting analyzes the dependencies between modules and merges the bundled modules into as few functions as possible.

For example, if you want to package two files:

```javascript
// test.js
export const a = 1

// index.js
import { a } from './test.js'
```

In this case, the packaged code would look something like this:

```javascript
[
  /* 0 */
  function (module, exports, require) {
    // ...
  },
  /* 1 */
  function (module, exports, require) {
    // ...
  }
]
```

However, with Scope Hoisting, the code is merged into one function wherever possible, so it looks like this instead:

```javascript
[
  /* 0 */
  function (module, exports, require) {
    // ...
  }
]
```

This packaging approach generates significantly less code than the previous one. To enable it in Webpack 4, just turn on `optimization.concatenateModules`:

```javascript
module.exports = {
  optimization: {
    concatenateModules: true
  }
}
```

(3) Tree Shaking

Tree Shaking can remove unreferenced code from a project, such as:

```javascript
// test.js
export const a = 1
export const b = 2

// index.js
import { a } from './test.js'
```

In this case, since the variable b in the test file is not used anywhere in the project, it will not be packaged into the bundle.

With Webpack 4, this tuning feature is automatically enabled when the production environment is turned on.

3. How to use webpack to optimize front-end performance?

Optimizing front end performance with WebPack means optimizing the output of WebPack so that the packaged end result runs quickly and efficiently in the browser.

  • Code compression: remove redundant code and comments, simplify how code is written, and so on. You can use Webpack’s UglifyJsPlugin and ParallelUglifyPlugin to compress JS files, and cssnano (css-loader?minimize) to compress CSS
  • CDN acceleration: during the build, change referenced static resource paths to the corresponding paths on the CDN. You can modify resource paths via Webpack’s output option and each loader’s publicPath parameter
  • Tree Shaking: remove pieces of code that are never executed. This can be done by appending the optimize-minimize parameter when starting Webpack
  • Code Splitting: split code into chunks by route or component so that it can be loaded on demand and make full use of the browser cache
  • Extracting common third-party libraries: the SplitChunksPlugin extracts common modules, letting the browser cache the shared, rarely-changing code for a long time

4. How to improve webpack build speed?

  1. With multiple entries, use the CommonsChunkPlugin to extract common code
  2. Use the externals configuration to extract common libraries
  3. Pre-compile resource modules with DllPlugin and DllReferencePlugin: use DllPlugin to pre-compile the npm packages that we reference but that never change, then use DllReferencePlugin to load the pre-compiled modules in
  4. Use HappyPack for multi-threaded compilation to speed things up
  5. Use webpack-uglify-parallel to speed up UglifyJsPlugin’s compression; in principle, webpack-uglify-parallel uses multi-core parallel compression to improve compression speed
  6. Use Tree Shaking and Scope Hoisting to strip out unneeded code