This article is an updated version of an earlier article of the same name: all of the webpack 3 content has been brought up to webpack 4, and the automation ideas the author has recently picked up at work have been added to round the article out further.

Introduction

There are many established metrics for website performance in the industry, but as front-end developers we should pay most attention to the following: white-screen time, first-screen time, full-page load time, DNS lookup time, and CPU usage. I built a website (url: http://jerryonlyzrj.com/resume/; it has recently been unreachable because of domain filing issues and will be back to normal in a few days) that, before any performance optimization, had a first-screen time of 12.67s. After optimization it came down to 1.06s, and that is without configuring CDN acceleration. Along the way I stepped into a lot of pitfalls and went through quite a few professional books, and I finally decided to write these days of effort up to help front-end enthusiasts avoid the detours.

Updates to this article may not reach the forum in real time, so feel free to follow my GitHub; I will keep the latest version in the corresponding project. Let's swim the sea of code together: https://github.com/jerryOnlyZRJ

Today we will introduce performance optimization in three parts: network transmission performance, page rendering performance, and JS blocking performance, walking readers systematically through the practice of performance optimization.

1. Network transmission performance optimization

Before we dive into optimizing network transmission performance, we need to understand how browsers handle user requests. Here is the key diagram:

This is the Navigation Timing metrics chart. From it we can see that after the browser receives a user request it goes through the following stages: redirection → cache lookup → DNS query → TCP connection → request → response → HTML processing → element loading complete. Take your time; we will discuss the details step by step.

1.1 Browser Cache

As we all know, before sending a request to the server, the browser first checks whether the same file exists in the local cache; if it does, it pulls the local cache directly. This is similar to Redis and Memcached on the back end, both of which act as an intermediate buffer.

Because the diagrams available online are too general, and the many caching articles I have read rarely lay out systematically which status codes appear and when the cache lives in memory versus on disk, I drew my own flowchart of the browser caching mechanism and will use it to explain things further.

Here we can use the Network panel in Chrome DevTools to view network transport information:

(Note that the Disable cache checkbox at the top of the Network panel must be unchecked when we debug the cache, otherwise the browser will never pull data from the cache.)

The browser cache lives in memory by default, but we know that the in-memory cache is cleared when the process ends or the browser closes, whereas the cache on disk can be kept for the long term. Most of the time, the Network panel shows two different states: from memory cache and from disk cache; the former is a cache hit in memory, the latter a cache hit on disk. What controls where the cache is stored is the Etag field we set on the server: when the browser receives the server's response, it checks the response headers and writes the cache to disk if an Etag field is present.

Pulling from the cache can produce two different status codes, 200 and 304, depending on whether the browser sent a revalidation request to the server. The 304 status code is returned only when a validation request is made to the server and it confirms that the cache has not been updated.

Here I’ll use nginx as an example to talk about how to configure caching.

First, let's open the nginx configuration file:


Insert the following two configuration items into it:
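A minimal sketch of what these two items might look like (the expiry time and the file-extension pattern are placeholders to adapt to your own static directory):

```nginx
# Inside the server/location block that serves static files.
etag on;          # nginx adds an ETag so the browser can revalidate (304)

location ~* \.(js|css|png|jpg|gif|svg)$ {
    expires 7d;   # sets Expires/Cache-Control; 7 days is a placeholder value
}
```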

Open the website and inspect the requested resources in the Network panel of Chrome DevTools. If you can see the Etag and Expires fields in the response headers, the cache configuration has succeeded.

【Special attention!!】 Bear in mind that when the browser handles a user request and hits the cache, it pulls the local cache directly and does not communicate with the server at all. In other words, if we update a file on the server, the browser has no way of knowing and cannot replace the stale cache. That is why, in the build phase, we add MD5 hash suffixes to our static resource file names, so that resource updates do not leave the front-end and back-end files out of sync.
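With webpack this usually comes down to putting a content hash in the output file name; a minimal sketch (the path pattern is illustrative):

```js
// webpack.config.js (fragment) -- a content hash in the file name means an
// updated resource gets a new URL, so stale browser caches are bypassed.
module.exports = {
  output: {
    filename: 'js/[name].[chunkhash:8].js' // path pattern is illustrative
  }
};
```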


1.2 Resource packaging and compression

The browser caching we set up above only helps the second time a user visits our page; to get good performance on the first visit, the resources themselves must be optimized. We usually boil network performance optimization down to three measures: reducing the number of requests, reducing the size of requested resources, and increasing the network transmission rate. Let's go through them one by one:

With front-end engineering in mind, we usually rely on a build tool to package and compile the files that go live. I recommend webpack here; Gulp and Grunt are also common choices for Node-based builds, and webpack itself has been moving toward Parcel-style zero-configuration simplicity.

When configuring webpack for production, we should pay special attention to the following points:

1. JS compression (this one should already be familiar, so I will not say much more):
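A minimal webpack 4 sketch using terser-webpack-plugin (the options shown are illustrative; webpack 4's production mode already minifies by default, so this is only needed when you want to tweak the settings):

```js
// webpack.config.js (fragment)
const TerserPlugin = require('terser-webpack-plugin');

module.exports = {
  optimization: {
    minimize: true,
    minimizer: [
      new TerserPlugin({
        terserOptions: {
          compress: { drop_console: true } // strip console.* calls from the bundle
        }
      })
    ]
  }
};
```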



2. HTML compression:

When we use html-webpack-plugin to inject JS automatically and package the HTML files together with the CSS, we rarely add any extra configuration to it. Here is an example you can copy directly. It is said that in webpack 5 the functionality of html-webpack-plugin may be folded into webpack itself, just as CommonsChunkPlugin was in webpack 4, so no additional plugin would need to be installed.
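A minimal html-webpack-plugin sketch with the html-minifier options enabled (the template path is an assumption for illustration):

```js
// webpack.config.js (fragment)
const HtmlWebpackPlugin = require('html-webpack-plugin');

module.exports = {
  plugins: [
    new HtmlWebpackPlugin({
      template: './src/index.html',  // assumed template location
      filename: 'index.html',
      minify: {
        collapseWhitespace: true,    // remove line breaks and extra spaces
        removeComments: true,        // strip HTML comments
        removeAttributeQuotes: true  // drop quotes where HTML allows it
      }
    })
  ]
};
```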

PS: here is a small trick: when writing the src or href attributes of HTML elements, we can omit the protocol part (for example src="//cdn.example.com/lib.js"), which also saves a few bytes.


3. Extract common resources:
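In webpack 4 this is done with optimization.splitChunks, which replaces the old CommonsChunkPlugin; a minimal sketch (the cache-group name is my own choice):

```js
// webpack.config.js (fragment) -- pull everything imported from node_modules
// into a shared "vendors" chunk that all pages can reuse.
module.exports = {
  optimization: {
    splitChunks: {
      chunks: 'all',
      cacheGroups: {
        vendors: {
          test: /[\\/]node_modules[\\/]/,
          name: 'vendors',
          chunks: 'all'
        }
      }
    },
    runtimeChunk: 'single' // keep the webpack runtime in its own small file
  }
};
```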

4. Extract and compress CSS:


When using webpack, we usually import CSS files as modules (the webpack philosophy is that everything is a module), but for production we need to extract and compress these CSS files. These seemingly complicated steps only take a few lines of configuration:

(PS: this requires the mini-css-extract-plugin, so remember to npm install it.)

I configured the PostCSS preprocessor here, but extracted its configuration into a separate postcss.config.js file; cssnano is an excellent CSS optimization plugin.
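A sketch of what this might look like (paths and file-name patterns are illustrative):

```js
// webpack.config.js (fragment) -- extract CSS into its own file with
// mini-css-extract-plugin and run it through postcss on the way.
const MiniCssExtractPlugin = require('mini-css-extract-plugin');

module.exports = {
  module: {
    rules: [
      {
        test: /\.css$/,
        use: [MiniCssExtractPlugin.loader, 'css-loader', 'postcss-loader']
      }
    ]
  },
  plugins: [
    new MiniCssExtractPlugin({ filename: 'css/[name].[contenthash:8].css' })
  ]
};

// postcss.config.js -- cssnano does the actual compression
module.exports = {
  plugins: [require('cssnano')({ preset: 'default' })]
};
```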


5. Switch webpack from development mode to production mode:

When packaging a project with webpack, we often include debugging code for development purposes; none of that is needed in production.
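In webpack 4 this is a one-line change; a minimal sketch:

```js
// webpack.config.js (fragment) -- mode: 'production' enables webpack 4's
// built-in optimizations and sets process.env.NODE_ENV to 'production'
// inside the bundle, so debug-only branches can be dropped.
module.exports = {
  mode: 'production'
};

// The webpack 3 equivalent, for comparison, was roughly:
// new webpack.DefinePlugin({ 'process.env.NODE_ENV': JSON.stringify('production') })
```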

If you can complete your production webpack configuration according to the points above, you can basically compress your file resources to the limit; if anything is missing, additions are welcome.

Finally, we should also enable gzip compression on the server, which shrinks our text-type files to roughly a quarter of their original size; the effect is immediate. Switch to the nginx configuration file again and add the following two configuration items:
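A minimal sketch of those two directives (the MIME-type list is trimmed to text formats, in line with the warning below):

```nginx
gzip on;                                   # enable gzip for responses
gzip_types text/plain text/css application/javascript application/json;
# note: no image MIME types here -- see the warning below
```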

【Special attention!!】 Do not gzip image files! I can tell you it is only counterproductive. As for the reason, you have to weigh the server's CPU usage during compression against the compression ratio achieved: compressing images consumes a lot of server resources while the gain is insignificant, truly a case of "more harm than good", so please remove image types from gzip_types. We will cover image handling in more detail next.


1.3 Image resource optimization

The resource packaging and compression we just introduced only operates at the code level, but in real development what really eats network bandwidth is not those files: it is the images. Optimize your images and you will see an obvious effect immediately.



1.3.1 Do not scale images in HTML

Many developers (myself included, once) have the illusion that putting a 400×400 image inside a 200×200 container is convenient and even makes the picture look sharper. Users do not perceive the scaled-down large image as any clearer, but all of this makes the page load more slowly and wastes bandwidth. You may not realize it, but the transfer time of a 200KB image and a 2MB image can differ enormously, something like 200ms versus 12s in my own painful experience (┬＿┬). So when you need large images, keep appropriately sized versions on the server and fix the image dimensions wherever possible.


1.3.2 Using CSS sprites

You must have come across the sprite-sheet concept many times; it is a classic technique for reducing the number of requests. And, curiously, when multiple images are combined into one, the total size is smaller than the sum of the originals (try it yourself). Here is a tool that generates sprite sheets automatically: https://www.toptal.com/developers/css/sprite-generator (image from the site's home page)

Once you add the relevant image files, it automatically generates the sprite sheet and the corresponding CSS styles for you.

In real projects we have an even more automated approach: the sprite-sheet generation plugin webpack-spritesmith. First, a brief outline of how the plugin is used to generate sprites:

First, we place the small icons we need in one folder for easy management:

(The @2x images here are resources for Retina 2x screens. webpack-spritesmith has configuration options specifically for Retina adaptation, which are covered below.)

Then we need the plugin to read all the image files in this folder, generate a sprite sheet named after the folder at a specified location, and output a CSS file that uses the sprite correctly.

The webpack-spritesmith plugin can do all of this for us. Here is the configuration:
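A sketch of the plugin configuration (the folder and output paths below are assumptions for illustration):

```js
// webpack.config.js (fragment)
const path = require('path');
const SpritesmithPlugin = require('webpack-spritesmith');

module.exports = {
  plugins: [
    new SpritesmithPlugin({
      src: {
        cwd: path.resolve(__dirname, 'src/assets/sprites/common'), // folder of small icons (assumed)
        glob: '*.png'
      },
      target: {
        image: path.resolve(__dirname, 'src/assets/dist/common.png'), // generated sprite sheet
        css: path.resolve(__dirname, 'src/assets/dist/common.css')    // generated styles
      },
      apiOptions: {
        cssImageRef: './common.png' // how the generated CSS references the sprite
      },
      retina: '@2x' // also picks up xxx@2x.png files for 2x screens
    })
  ]
};
```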

For details, refer to the official webpack-spritesmith documentation: https://www.npmjs.com/package/webpack-spritesmith

After running webpack, the two outputs described above are generated in the development directory. Let's take a look at common.css:

As you can see, a style is generated automatically for every image we put into the common folder; we do not have to do any of this by hand, the webpack-spritesmith plugin does it for us!


1.3.3 Using icon fonts (iconfont)

Whether compressed or combined into a sprite, an image is still an image, and it still consumes a lot of network transmission resources. With the arrival of icon fonts, however, front-end developers saw another magical world.

My favorite is Alibaba's iconfont vector icon library (website: http://www.iconfont.cn/), which holds a huge number of vector resources. You just add them to a cart, much like shopping on Taobao, and take them home; once your resources are organized it can even generate CDN links automatically, a perfect one-stop service. (Image from the official home page)

Vector icons can do most of what images do, yet they are just characters plus CSS styles inserted into the HTML; compared with image requests, the resource usage is not even in the same order of magnitude. If your project uses small icons, use vector icons.

But what if we are working on a company or team project that uses a lot of custom icons, and the designers simply hand you a pile of .svg files?

It is actually very simple. Alibaba's iconfont library supports uploading local SVG resources, and here I will recommend another site as well: IcoMoon. IcoMoon also provides the ability to convert SVG images into CSS styles automatically. (Image from the IcoMoon home page)

We can click the "Import Icons" button to import our local SVG resources, select them, and leave the rest to IcoMoon to generate the CSS. The workflow is similar to Alibaba's iconfont library.


1.3.4 Using WebP

WebP is an image format developed by Google to speed up image loading. Its compressed size is only about two-thirds that of JPEG, which saves a lot of server bandwidth and storage space. Well-known sites such as Facebook and eBay have already been testing and using the WebP format.

We can use Google's command-line tools on Linux to encode our project images to WebP, or use an online service; here I recommend Upyun's converter (https://www.upyun.com/webp). In real production work, though, we still write shell scripts and use the command-line tools for automated conversion; the online services are convenient for quick tests. (Image from the Upyun official website)
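A minimal shell sketch of such a batch conversion, using Google's cwebp tool (the quality value is a placeholder to tune per project):

```sh
# Batch-convert the PNGs in the current directory to WebP.
for f in *.png; do
  cwebp -q 75 "$f" -o "${f%.png}.webp"
done
```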

1.4 Network transmission performance detection tool — Page Speed

Besides the Network panel, Chrome has a plugin for monitoring network performance called Page Speed, featured in this article's cover image (because I think it is excellent). To install it, go through: Chrome menu → More tools → Extensions → open the Chrome Web Store → search for PageSpeed and add it; afterwards you can find it in Chrome DevTools.

(PS: accessing the Chrome Web Store requires getting around the network restrictions in mainland China; I will not go into how.)

This is how Page Speed works:

We just open the page to be tested and click the Start Analyzing button in Page Speed, and it automatically tests the network transmission performance for us. This is the result for my site:

The best thing about Page Speed is that it gives complete suggestions for the performance bottlenecks it finds on your site, so you can optimize accordingly. The Page Speed Score indicates your performance score, and 100/100 means there is nothing left to improve.

After optimizing, use the Network panel of Chrome DevTools again to measure the white-screen time and first-screen time of the page. Quite an improvement, isn't it?


1.5 Use a CDN

Last but not least: no matter how well we apply the optimizations above, performance can only be pushed to its limit with the support of a CDN.

If we run $ traceroute targetIp on Linux or > tracert targetIp at the Windows command prompt, we can trace all the routers a packet passes between the user and the target machine. It goes without saying that the farther the user is from the server and the more routers the packets pass through, the higher the latency. One purpose of using a CDN is to solve exactly this problem, and it also takes pressure off the IDC.

Of course, with an individual's finances (unless you are Wang Sicong) we certainly cannot build our own CDN, but we can use the services provided by the big vendors, such as Tencent Cloud; configuration is also very simple, so I will leave that for you to explore.


2. Page rendering performance optimization

2.1 Browser rendering process (WebKit)

You should already be roughly familiar with how the browser renders HTML; the basic flow is described in the diagram above. When you were getting started, a mentor or senior colleague probably told you to reduce reflow and repaint because they hurt browser performance, but do you know how that actually works? Today we will introduce some deeper concepts from WebKit Technology Insider (a book I strongly recommend buying; at the very least, as a front-end engineer you should know what the browser kernel you face every day is doing).

PS: since the kernel has come up, let me also explain the relationship between the browser's rendering engine, its interpreters and the other components, because junior colleagues and front-end fans often ask me about this and cannot tell them apart. I drew a diagram to illustrate it (skip ahead if you are not interested):

The browser's interpreters live inside the rendering engine: Chrome uses the WebKit engine (now Blink), Safari uses WebKit, and Firefox uses Gecko. Inside the rendering engine we have the HTML interpreter (which builds the DOM tree during rendering), the CSS interpreter (which computes the CSS rules) and the JS interpreter. However, as JS has grown ever more important and its workload more complex, the JS interpreter has gradually become independent as a separate JS engine, like the well-known V8 engine; the Node.js we deal with all the time uses it too.


2.2 DOM rendering layers and GPU hardware acceleration

If I told you that a page is made up of many, many layers, like a lasagna, could you picture what the page actually looks like? To help your imagination, here is a layer diagram from the old Firefox 3D View plugin:

A page is composed of multiple DOM elements and rendering layers (Layers). In fact, after the render tree is built, a page goes through the following steps before it finally appears in front of us:

  • The browser takes the DOM tree and, based on styles, splits it into independent rendering layers

  • The CPU draws each rendering layer into a bitmap

  • The bitmaps are uploaded to the GPU (graphics card) as textures for drawing

  • The GPU caches all rendering layers (if a rendering layer uploaded next time has not changed, the GPU does not need to redraw it) and composites the layers into the final image

As we can see from these steps, layout is handled by the CPU, while drawing is done by the GPU.

Chrome provides tools to inspect the layout of rendering layers and GPU usage (so do try out some of Chrome's lesser-known tools; many of them turn out to be magical):

Chrome Developer Tools menu → More Tools → Layers

Chrome Developer Tools menu → More Tools → Rendering

After doing this, you should see something like this in your browser:

There is a lot going on here, so let's go through it module by module:

(I) First, the small black window at the top right of the page: as its prompt says, it shows our GPU occupancy, so we can clearly tell whether a large amount of repainting is happening on the page.

(II) The Layers panel is the tool for inspecting the DOM rendering layers we just mentioned: the list on the left shows which layers exist on the page, together with their details.

(III) The Rendering panel sits in the same drawer as the console, so do not lose track of it. The first three checkboxes are the ones we use most often; let me explain what they do (acting as a free translator):

① Paint flashing: when checked, elements being repainted on the page are highlighted

② Layer Borders: similar to the Layers panel, it marks the rendering layers of the page with highlighted borders

③ FPS meter: opens the small black window mentioned in (I) so we can observe GPU occupancy

You might ask what the point of discussing DOM rendering layers is if they have nothing to do with performance optimization. Remember that the GPU caches our rendering layers. Now imagine that we take an element that keeps being reflowed and repainted and promote it to its own rendering layer: changes to that element no longer force all the other elements to be redrawn along with it.

Which raises the question: under what circumstances is a new rendering layer created? You only need to remember:

Video elements, WebGL, Canvas, CSS3 3D transforms, CSS filters, and elements whose z-index is greater than a neighboring node's will all trigger a new layer. The most common approach, though, is simply to add a style like the following to an element:
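The original snippet is not reproduced here; a commonly used version looks like this (the class name is my own):

```css
.gpu-layer {
  transform: translateZ(0);      /* promotes the element to its own compositing layer */
  /* or, more explicitly: will-change: transform; */
}
```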

This promotes the element onto its own rendering layer (^__^).

This trick is what we call hardware acceleration, or GPU acceleration: separate the elements that reflow and repaint frequently from the "static" ones, and let the GPU take on more of the rendering work. You have heard the term before; now you know exactly how it works.


2.3 Reflow and repaint

Now for the main act: reflow and repaint. First, the concepts:

  • Reflow (rearrangement): a change in the layout of elements inside a rendering layer causes the page to be laid out again, for example by resizing the window, removing or adding DOM elements, or modifying CSS properties that affect an element's box size (width, height, padding).

  • Repaint: drawing, that is, rendering colors; any change to an element's visual appearance causes a repaint.

We can use the Performance panel of Chrome DevTools to measure how much time reflow and repaint cost a page:

  • Blue: time spent on HTML parsing and network communication

  • Yellow: time spent executing JavaScript

  • Purple: time spent on reflow (layout)

  • Green: time spent on repaint (painting)

Both reflow and repaint block the browser. To improve page performance, we need to lower the frequency and cost of reflow and repaint and trigger re-rendering less often. As mentioned in 2.2, layout (reflow) is handled by the CPU while painting is handled by the GPU; the CPU is far less efficient at this kind of work than the GPU, and a reflow always causes a repaint, whereas a repaint does not necessarily cause a reflow. So in performance optimization, the focus is on reducing reflows.

Here is a site that lists in detail which CSS properties trigger reflow or repaint in the different rendering engines:

https://csstriggers.com/ (image from the official website)

2.4 Optimization strategies

Enough theory; what everyone really wants are the practical remedies, so brace yourself for a big wave of them:

(1) Separate CSS reads and writes: every time JS reads an element's style, the browser may be forced to re-render (reflow + repaint), so when we use JS to read and write element styles it is best to separate the two, reading first and writing afterwards, and avoid interleaving them. The most radical solution, and the one I recommend, is not to manipulate element styles from JS at all. (A small sketch of points (1) to (3) follows this list.)

(2) Batch style changes by toggling a class or writing style.cssText in one go.

(3) Update DOM elements offline: for operations such as appendChild, we can use a DocumentFragment object to do the work off-document and insert the result into the page once the elements are assembled, or hide the element with display: none and operate on it while it is "off screen".

(4) Make unused elements invisible with visibility: hidden; this reduces repaint pressure, and the elements can be shown again when needed.

(5) Compress DOM depth: a rendering layer should not contain excessively deep child elements. Build the page's style with fewer DOM nodes and prefer pseudo-elements or box-shadow where they can substitute.

(6) Specify image dimensions before rendering: since img is an inline element, its width and height change once the image loads, and in bad cases this reflows the whole page. Specify the size in advance, or take the image out of the document flow.

(7) For elements that are reflowed and repainted heavily, promote them to their own rendering layer so the GPU shares the CPU's load. (Use this strategy with caution: consider whether spending GPU resources really buys a predictable performance gain, because too many rendering layers put unnecessary strain on the GPU; usually we reserve hardware acceleration for animated elements.)
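To make points (1) to (3) concrete, here is a minimal sketch; the selectors ('.item', '#list') are illustrative:

```js
// (1) Read/write separation: do all the reads first, then all the writes,
// so the browser is not forced to re-layout between every read/write pair.
const items = Array.from(document.querySelectorAll('.item'));
const widths = items.map(el => el.offsetWidth);               // reads, batched
items.forEach((el, i) => {
  // (2) Batch the writes through cssText (or by toggling a class)
  el.style.cssText += `;width:${widths[i] / 2}px;`;
});

// (3) Offline DOM update: assemble new nodes in a DocumentFragment,
// then insert them into the page in a single operation.
const fragment = document.createDocumentFragment();
for (let i = 0; i < 100; i++) {
  const li = document.createElement('li');
  li.textContent = `item ${i}`;
  fragment.appendChild(li);
}
document.querySelector('#list').appendChild(fragment);
```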


3. JS blocking performance

JavaScript dominates web development today; even a simple static page contains some JS, and without JS there would be hardly any user interaction. The problem with scripts, however, is that they block the page's parallel downloads and raise the process's CPU usage. What's more, now that Node.js is everywhere in front-end work, a memory leak or an accidentally written infinite loop can bring our servers down. In an era when JS spans both the front end and the back end, its performance bottlenecks no longer just hurt the user experience; they can cause far more serious problems, so JS performance work should not be underestimated.

During coding, if we use a closure without releasing the resources it holds afterwards, or keep a reference without nulling it out (for example, binding an event callback to a DOM element and later removing that element), a memory leak can occur; CPU load then climbs and the page stutters or even freezes. We can investigate with the JavaScript Profiler panel provided by Chrome, which is opened the same way as the Layers panel, so I will not repeat that here.
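A sketch of the kind of leak described above (the '#panel' selector is hypothetical):

```js
// A long-lived cache keeps a reference to a DOM element after the element
// has been removed from the page, so the detached node, and everything its
// handler closes over, can never be garbage-collected.
const cache = {};

function init() {
  const panel = document.querySelector('#panel'); // hypothetical element
  cache.panel = panel;                            // long-lived reference
  panel.addEventListener('click', () => {
    console.log('clicked', cache.panel.id);
  });
}

function destroy() {
  document.querySelector('#panel').remove(); // the element leaves the DOM...
  // ...but cache.panel still points at it: a leak.
  cache.panel = null;                        // the fix: release the reference
}
```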

If I add a line like while(true){} to the code, CPU usage immediately climbs to an abnormal level (93.26%).

The browser's robust memory-reclamation mechanism avoids this situation most of the time, and even if a user's page does freeze, they only need to end the relevant process (or close the browser) to solve the problem. But remember that the same thing can happen on our server, that is, in our Node process, and in serious cases it takes the server down and the website crashes. Most of the time we use the JavaScript Profiler panel to stress-test our Node services; with the node-inspector plugin we can detect the CPU usage of each function during JS execution more effectively and optimize accordingly.

(PS: until you have reached a certain level of skill, avoid using closures on the server side. On the one hand they are rarely necessary, as there are better solutions; on the other, it is genuinely easy to leak memory with them and cause consequences you will not anticipate.)



4. [Extension] Load balancing

Load balancing is listed as an extension because, if you are building a personal or small-to-medium site, you hardly need to worry about concurrency; but if you are building a large site, load balancing is an indispensable part of the development process.

4.1 Node.js handles I/O-intensive requests

Today's development process emphasizes separating the front end from the back end, what software engineering calls "high cohesion, low coupling". You can also think of it in terms of modularity: decoupling front end and back end splits a project into two large modules connected by an interface and developed separately. What is the benefit? Let me give the most practical one: "asynchronous programming". That is my own name for it, because the decoupled workflow closely resembles JS's asynchronous queue. The traditional model is "synchronous": the front end has to wait for the back end to finish encapsulating the interface, and to know what data will be available, before it can develop. After decoupling, the two sides only need to agree on the interface in advance and can then develop simultaneously, which is both efficient and time-saving.

As we all know, the core of Node is event-driven: it processes user requests asynchronously through the event loop. Compared with a traditional back-end service, every user request is handed to an asynchronous event queue for processing. This article explains the event-driven mechanism especially vividly and is easy to follow: https://mp.weixin.qq.com/s?__biz=MzAxOTc0NzExNg==&mid=2665513044&idx=1&sn=9b8526e9d641b970ee5ddac02dae3c57&scene=21#wechat_redirect

What is the biggest advantage of being event-driven? Under high-concurrency I/O it does not block, and for real-time or live sites that is crucial. There are successful precedents: products whose powerful high-concurrency I/O ultimately traces back to Node.

In fact, enterprise websites nowadays commonly add a Node layer as a middle tier. The outline of such a site looks like this:

4.2 PM2: "multi-threading" for Node.js

We all know Node's pros and cons. Here is a link I spent quite a while finding; it is written in great detail: https://www.zhihu.com/question/19653241/answer/15993549. In fact, those who say Node is no good are mostly pointing at its single-threaded weakness.

For that, we have a solution: PM2. Its website is http://pm2.keymetrics.io/. PM2 is a Node.js process manager that can start a Node.js service on every core of your machine, which means that on a multi-core processor it launches multiple Node.js services, and it handles load balancing automatically, dispatching user requests to the least-loaded process. It sounds like a real artifact! Its features go far beyond this; I will not introduce them all here, it is enough to know that we want to use it. Installation is also very simple: install it globally with npm, $ npm i pm2 -g. For specific usage and features, see the official site. The project needs a pm2.json file, pm2's startup configuration file; we fill it in ourselves, and the meaning of each field can be found in the GitHub source. To run, we just enter the command $ pm2 start pm2.json.
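A minimal pm2.json sketch (the app name and script path are placeholders):

```json
{
  "apps": [
    {
      "name": "my-site",
      "script": "./server/app.js",
      "instances": "max",
      "exec_mode": "cluster",
      "env": { "NODE_ENV": "production" }
    }
  ]
}
```

Here "instances": "max" starts one process per CPU core, and "exec_mode": "cluster" lets pm2 share the same port across those processes.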

This is what pm2 looks like after startup:

4.3 Setting up an Nginx reverse proxy

Before we start setting it up, you need to know what a reverse proxy is. If the term is unfamiliar, start with this picture:

A proxy is what we usually call an intermediary. A website's reverse proxy is a server that sits between the user and our real servers; its job is to distribute user requests, by polling, to the least-loaded real server. That should sound familiar; yes, I said the same thing when introducing pm2. A reverse proxy, like pm2, implements load balancing, and now you can also see the difference between the two: the reverse proxy balances load across servers, while pm2 balances load across processes.

If you want a thorough understanding of reverse proxies, I recommend this Zhihu thread: https://www.zhihu.com/question/24723688. But you might think: servers are an ops concern, so what does this have to do with the front end? True; for this part of the work we only need to hand operations a configuration file.
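A sketch of such a configuration file (the upstream name, domain and addresses are placeholders; the directives match what sections 4.3.1 and 4.3.2 describe):

```nginx
upstream mysite {                 # "mysite" is our custom project name
    ip_hash;                      # keep a returning user on the same server
    server 127.0.0.1:3000;        # the real Node service started by pm2 (assumed address/port)
    # server 10.0.0.2:3000;       # additional real servers go here
}

server {
    listen 80;                    # the port the proxy listens on
    server_name example.com;      # placeholder domain

    location / {
        proxy_pass http://mysite; # hand matching requests to the upstream above
    }
}
```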


In other words, when working with operations, we only need to adapt the few lines of configuration above into our own settings and send the file over; the ops folks will understand the rest without further explanation.

But what do these lines of configuration mean? First, remember that Nginx modules fall into three main categories: handler, filter and upstream. The upstream module, which receives, processes and forwards network data, is the one we need for a reverse proxy. Next, let's look at what each part of the configuration means.


4.3.1 Upstream configuration information

The identifier immediately following the upstream keyword is our custom project name, and a pair of curly braces encloses its configuration.

ip_hash keyword: controls whether a returning user is connected to the same server on subsequent visits.

server keyword: the address of a real server behind the proxy. This part we must fill in ourselves; otherwise how would operations know which server you put the project on, or that you wrapped it in a Node layer that listens on port 3000?


4.3.2 Server configuration information

server is Nginx's basic configuration block; through it we apply the upstream we defined to an actual server.

listen: the port the proxy server listens on.

The location keyword plays the same role as the routes we discussed in the Node layer earlier: it maps the user's request to the corresponding upstream.


5. Read more

Website performance optimization and monitoring is a complex undertaking with a great deal of follow-up work; what I have covered here is only the tip of the iceberg. Beyond being familiar with development conventions, it also takes accumulated hands-on experience.

After leafing through many books on website performance, I still prefer Tang Wen's book on large-scale website performance monitoring, analysis and optimization; the material in it is relatively new and practical. At the very least I came away from it greatly rewarded and with a clearer head, and I hope readers interested in performance will go and read it after finishing my article.

I also suggest that, when you have some spare time, you go over Yahoo's performance rules (the "Yahoo military rules"). They are a platitude by now, but every one is a pearl. It would be great if you could commit them to memory. Portal:

www.cnblogs.com/xianyulaodi…


Author: jerryOnlyZRJ

From: IMWeb Front-end blog

http://imweb.io/topic/5b6fd3c13cb5a02f33c013bd