Website performance monitoring and optimization strategy




0. Introduction

For an Internet product, nothing matters more than the user experience. In the nationwide “Internet+” boom, “users first” has been embraced by most companies, and in this era of rapidly growing mobile usage our pages are no longer viewed only in a desktop browser; more often, users reach them through mobile products. With more and more developers building Web apps and Hybrid apps, performance has once again become a focus for programmers. I once read a line to the effect that the experience of a website determines whether users are willing to explore its features, and its features determine whether users will reject the experience outright. It is an Internet catchphrase, but it captures everything about site performance, especially for a project like a website: if a user needs more than 5 seconds to see a page, he will close it without hesitation.

Performance optimization, as part of an engineer’s “kung fu”, is a perennial topic in development and a rite of passage from junior to senior developer. We have all seen plenty of guidelines and tricks, yet in real practice we are often overwhelmed: we don’t know what has been left out, or whether there is any room left for further optimization.

There are many established performance indicators in the industry, but as front-end developers we should pay particular attention to the following: white screen time, first screen time, whole page load time, DNS lookup time and CPU usage. A website I built myself (url: jerryonlyzrj.com/resume/, temporarily unreachable due to a domain name filing issue and due back in a few days) took 12.67s to reach the first screen with no performance optimization at all; after many rounds of optimization it came down to 1.06s, and that is without CDN acceleration. Along the way I stepped into plenty of pits and leafed through quite a few professional books, and I finally decided to write up the effort of these days to help fellow front-end enthusiasts take fewer detours.

Today we will walk through performance optimization step by step from three angles — network transmission performance, page rendering performance and JS blocking performance — and systematically take readers through the practical process of optimizing a site.


1. Optimize network transmission performance

Before we get into optimizing network transmission performance, we need to understand how the browser processes a user request, so here is the key diagram:



This is the Navigation Timing monitoring chart. As you can see, after the browser receives a user request it goes through the following stages: redirect → check cache → DNS lookup → establish TCP connection → send request → receive response → process HTML elements → element loading complete. Take your time, we’ll go through the details step by step:
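
The white screen time, first screen time and DNS time mentioned above can all be read off these stages. Here is a minimal sketch using the legacy performance.timing API (widely supported); the “first screen” line is only a rough approximation, since the real first screen depends on your page:

const t = window.performance.timing
// run after the load event so loadEventEnd is populated
window.addEventListener('load', function () {
    setTimeout(function () {
        console.log('DNS lookup:      ' + (t.domainLookupEnd - t.domainLookupStart) + 'ms')
        console.log('TCP connection:  ' + (t.connectEnd - t.connectStart) + 'ms')
        console.log('White screen:    ' + (t.responseStart - t.navigationStart) + 'ms')
        console.log('First screen(~): ' + (t.domContentLoadedEventEnd - t.navigationStart) + 'ms')
        console.log('Whole page:      ' + (t.loadEventEnd - t.navigationStart) + 'ms')
    }, 0)
})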

1.1. Browser cache

We all know that before the browser sends a request to the server, it checks whether a local copy exists, and if so it pulls the local cache directly. This is similar to Redis or Memcache acting as an intermediate buffer. Let’s take a look at the browser’s caching strategy:



Because the diagrams available online are rather generic, and the many caching articles I have read rarely sort out systematically which status codes are returned and when content is cached in memory versus on disk, I drew a flowchart of the browser’s caching mechanism and will use it to explain the mechanism in more detail.

Here we can use the Network panel in Chrome DevTools to view information about network traffic:

(Note that when debugging the cache we need to uncheck Disable cache at the top of the Network panel, otherwise the browser will never pull data from the cache.)



By default the browser caches in memory, but we know that the memory cache is cleared when the process ends or the browser is closed, whereas the disk cache is retained for long periods. Most of the time we will see two different states in the Size column of the Network panel: from memory cache and from disk cache — the former means the cache came from memory, the latter from disk. Where the cache ends up is controlled by the Etag field we set on the server: when the browser receives the response, it checks the response headers and, if an Etag field is present, writes the cache to disk.

There are also two different status codes, 200 and 304, depending on whether the browser re-validated with the server: 304 is returned only when a validation request was sent to the server and the cached copy turned out to be still fresh.

Using nginx as an example, I’ll talk about how to configure caching:

First, open the nginx configuration file:

$ vim nginxPath/conf/nginx.conf

Insert the following two items into the configuration document:

etag on;        # enable ETag validation
expires 7d;     # set the cache expiration time to 7 days

Open our website and look at the requested resources in the Network panel of Chrome DevTools: if we see the Etag and Expires fields in the response headers, our cache configuration has taken effect.



Special attention!! We must bear in mind when configuring the cache that if the browser hits the cache while handling a user request, it pulls the local copy directly and does not communicate with the server at all. In other words, if we update a file on the server, the browser has no way of knowing, and it cannot invalidate its cache. That is why, during the build phase, we add md5 hash suffixes to our static resource filenames, to avoid updated resources on the server falling out of sync with what the browser has cached.
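
As a minimal sketch of what that looks like in a webpack output section (the production config later in this article uses [hash:5]; [chunkhash]/[contenthash] give finer-grained per-file hashes):

output: {
    path: __dirname + '/build/static',
    // the hash changes whenever the content changes, so after every release the
    // browser sees a brand-new URL and can never serve a stale copy
    filename: 'scripts/[name]-[chunkhash:5].js'
}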

1.2. Resource packaging and compression

Our browser caching efforts only help when the user visits our page for the second time; to achieve good performance on the very first visit, the resources themselves must be optimized. We usually group network performance optimization into three areas: reducing the number of requests, reducing the size of the requested resources, and improving the transmission rate. Let’s take them one at a time:

For bundling I recommend webpack; I generally use Gulp and Grunt only for compiling Node-side tasks. Parcel is still too new, and webpack has been steadily moving closer to Parcel in its features anyway.



When configuring webpack for production, we need to pay special attention to the following:

① JS compression (this one should be familiar by now, so not much to introduce):

new webpack.optimize.UglifyJsPlugin()

② HTML compression:

new HtmlWebpackPlugin({
            template: __dirname + '/views/index.html', // create an instance of the plugin and pass in the template
            filename: '../index.html',
            minify: {
                removeComments: true,
                collapseWhitespace: true,
                removeRedundantAttributes: true,
                useShortDoctype: true,
                removeEmptyAttributes: true,
                removeStyleLinkTypeAttributes: true,
                keepClosingSlash: true,
                minifyJS: true,
                minifyCSS: true,
                minifyURLs: true,
            },
            chunksSortMode: 'dependency'
        })

When we use html-webpack-plugin to automatically inject JS and packaged CSS into our HTML files, we rarely add configuration options to it, so here is an example you can copy directly.

PS: Here’s a small trick: when writing the src or href attribute of an HTML element we can omit the protocol part (for example src="//cdn.example.com/lib.js", with a placeholder domain), which also saves a little bandwidth.

③ Extract common (vendor) resources:

new webpack.optimize.CommonsChunkPlugin({
            name: 'vendor',
            filename: 'scripts/common/vendor-[hash:5].js'
        })

PS: This is webpack 3 syntax; in webpack 4 CommonsChunkPlugin has been removed in favour of optimization.splitChunks, so please take note — a hedged sketch follows.
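
For reference, a minimal webpack 4 equivalent (a sketch only — the "vendor" cacheGroup name is a convention, not a requirement):

// webpack 4: replaces CommonsChunkPlugin
optimization: {
    splitChunks: {
        cacheGroups: {
            vendor: {
                test: /[\\/]node_modules[\\/]/, // pull everything imported from node_modules
                name: 'vendor',
                chunks: 'all'
            }
        }
    }
}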

④ Extract and compress the CSS:

When using webpack we usually import CSS files as modules (in webpack’s philosophy everything is a module), but for production we also need to extract and compress these CSS files. This seemingly complicated process only requires a few simple lines of configuration:

(PS: this relies on extract-text-webpack-plugin, so you need to npm install it first.)

const ExtractTextPlugin = require('extract-text-webpack-plugin')
module: {
        rules: [..., {
            test: /\.css$/,
            use: ExtractTextPlugin.extract({
                fallback: 'style-loader',
                use: {
                    loader: 'css-loader',
                    options: {
                        minimize: true
                    }
                }
            })
        }]
}
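
PS: In webpack 4, extract-text-webpack-plugin has been superseded by mini-css-extract-plugin. A minimal sketch, assuming that plugin is installed:

const MiniCssExtractPlugin = require('mini-css-extract-plugin')
module: {
        rules: [..., {
            test: /\.css$/,
            // the loader replaces style-loader / ExtractTextPlugin.extract and
            // emits the CSS into its own file
            use: [MiniCssExtractPlugin.loader, 'css-loader']
        }]
},
plugins: [
        new MiniCssExtractPlugin({ filename: 'styles/style-[contenthash:5].css' })
]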

⑤ Use webpack 3’s new feature: ModuleConcatenationPlugin (scope hoisting):

new webpack.optimize.ModuleConcatenationPlugin()

If you can fully apply the five production-oriented webpack configurations above, you can basically compress your file resources down to the limit. If I have missed anything, I hope you will add it.

Here is a copy of my webpack production configuration file for reference:

//webpack.pro.js
const webpack = require('webpack')
const HtmlWebpackPlugin = require('html-webpack-plugin')
const ExtractTextPlugin = require('extract-text-webpack-plugin')
const CleanWebpackPlugin = require('clean-webpack-plugin')
const CopyWebpackPlugin = require('copy-webpack-plugin')
module.exports = {
    entry: __dirname + '/public/scripts/index.js',
    output: {
        path: __dirname + '/build/static', // where the bundled files are emitted
        filename: 'scripts/[name]-[hash:5].js' // output filename, with an md5 hash
    },
    resolve: {
        extensions: ['.jsx', '.js']
    },
    module: {
        rules: [{
            test: /(\.jsx|\.js)$/,
            use: {
                loader: 'babel-loader'
            },
            exclude: /node_modules/
        }, {
            test: /\.css$/,
            use: ExtractTextPlugin.extract({
                fallback: 'style-loader',
                use: {
                    loader: 'css-loader',
                    options: {
                        minimize: true
                    }
                }
            })
        }]
    },
    plugins: [
        new HtmlWebpackPlugin({
            template: __dirname + '/views/index.html', 
            filename: '../index.html',
            minify: {
                removeComments: true,
                collapseWhitespace: true,
                removeRedundantAttributes: true,
                useShortDoctype: true,
                removeEmptyAttributes: true,
                removeStyleLinkTypeAttributes: true,
                keepClosingSlash: true,
                minifyJS: true,
                minifyCSS: true,
                minifyURLs: true,
            },
            chunksSortMode: 'dependency'
        }),
        new ExtractTextPlugin('styles/style-[hash:5].css'),
        new CleanWebpackPlugin('build/*', {
            root: __dirname,
            verbose: true,
            dry: false
        }),
        new webpack.optimize.UglifyJsPlugin(),
        new CopyWebpackPlugin([{
            from: __dirname + '/public/images',
            to: __dirname + '/build/static/images'
        }, {
            from: __dirname + '/public/scripts/vector.js',
            to: __dirname + '/build/static/scripts/vector.js'
        }]),
        new webpack.optimize.ModuleConcatenationPlugin(),
        new webpack.optimize.CommonsChunkPlugin({
            name: 'vendor',
            filename: 'scripts/common/vendor-[hash:5].js'
        })
    ]
}

Finally, we should also enable Gzip compression on the server, which can shrink our text-based files to roughly a quarter of their previous size, with immediately visible results. Open our nginx configuration file and add the following two directives:

gzip on;
gzip_types text/plain application/javascript application/x-javascript text/css application/xml text/javascript application/x-httpd-php application/vnd.ms-fontobject font/ttf font/opentype font/x-woff image/svg+xml;

If you see a field like Content-Encoding: gzip in the response headers of a request to the site, then Gzip compression has been configured successfully:



Special attention!! Do not Gzip image files! Do not Gzip image files! Do not Gzip image files! It is counterproductive: you have to weigh the server CPU consumed during compression against the compression ratio achieved, and compressing images not only eats a lot of server resources but also yields insignificant savings — truly “more harm than good”. So please remove image types from gzip_types. We will cover image-specific handling next.


1.3. Image resource optimization

The resource packaging and compression we just covered stays at the code level, but in real-world development the resources that actually dominate network transmission are not those files — they are images. Optimize your images and you will see an obvious improvement immediately.

1.3.1. Do not scale images in HTML

Many developers have the illusion (I used to as well) that putting a 400 × 400 image into a 200 × 200 slot gives the user a sharper picture. It doesn’t look any sharper to the user; all it does is slow down loading and waste bandwidth. You may not realise that a 200KB image and a 2MB image can mean the difference between roughly 200ms and 12s of transfer time (learned from painful experience). So when you need images at several sizes, keep appropriately sized versions on the server and serve each image at the size it will actually be displayed.

1.3.2. Using CSS Sprite

Sprite sheets are a concept you hear a lot about in development, and they are a classic way to reduce the number of requests. The surprising part is that when multiple images are combined into one, the total size is often smaller than the sum of the individual images (you can try this for yourself). Here is an automatic sprite sheet generator: www.toptal.com/developers/… (image from the tool’s homepage)



You simply add the relevant image files and it will automatically generate the sprite sheet and the corresponding CSS styles for you; all you have to do is download and copy them.

1.3.3. Use iconfont

Whether compressed or combined into sprite sheets, images are still images, and they still consume a lot of network transmission resources. But with the advent of icon fonts, front-end developers discovered another magical world.

I like to use the Alibaba vector icon library iconfont (website: www.iconfont.cn/), which has a huge number of vector icon resources. You just add them to a cart, Taobao-style, and take them home; once you have organised your icons it can even generate a CDN link automatically — a perfect one-stop service. (Image from the official homepage)



Vector icons can do almost everything bitmap icons can, yet they are nothing more than characters and CSS styles inserted into the HTML; in terms of network transmission cost they are not even in the same order of magnitude as image requests. If your project uses a lot of small icons, use vector icons instead.

1.3.4. Using WebP

WebP is an image format developed by Google to speed up image loading. Its compressed size is only about two thirds of JPEG, which saves a lot of server bandwidth and storage. Well-known sites such as Facebook and Ebay are already testing and using the WebP format.

We can use the Linux command-line tool provided on the official site to convert the project’s images to WebP, or use an online service — here I recommend Upyun (www.upyun.com/webp). For the real production pipeline we still end up writing shell scripts and using the command-line tool to convert in batches, but during the testing phase the online service is convenient and fast enough. (Image from the Upyun official site)
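
As a concrete example of that batch conversion, here is a minimal Node.js sketch that shells out to Google’s cwebp command-line tool (assuming cwebp is installed and on the PATH; the directory names are placeholders):

const fs = require('fs')
const path = require('path')
const { execFileSync } = require('child_process')

const srcDir = './public/images'          // placeholder input directory
const outDir = './public/images-webp'     // placeholder output directory
if (!fs.existsSync(outDir)) fs.mkdirSync(outDir)

fs.readdirSync(srcDir)
    .filter(function (file) { return /\.(png|jpe?g)$/i.test(file) })
    .forEach(function (file) {
        const input = path.join(srcDir, file)
        const output = path.join(outDir, file.replace(/\.\w+$/, '.webp'))
        // -q 75 is a reasonable quality/size trade-off
        execFileSync('cwebp', ['-q', '75', input, '-o', output])
        console.log('converted ' + file)
    })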



1.4. Network transmission performance detection tool — Page Speed

Besides the Network panel, Chrome also has a plugin dedicated to monitoring network performance: Page Speed — it is also the image on the cover of this article (because I think it looks great). To install it: Chrome menu → More tools → Extensions → Chrome Web Store → search for PageSpeed and install.

(PS: accessing the Chrome Web Store requires getting over the firewall in some regions, so I won’t go into the details here.)

This is the Page Speed function interface:



We just need to open the webpage to be tested and click the Start analyzing button in Page Speed, then it will automatically help us test the network transmission performance. Here is the test result of my website:



The most user-friendly aspect of Page Speed is that it gives complete recommendations for your site’s performance bottlenecks, and you can optimize according to its hints. Here my site has been optimized as far as it will go: a Page Speed Score of 100/100 means there is nothing left for it to suggest.

After optimizing, measure the white screen time and first screen time of your pages again with the Network panel of Chrome DevTools — did you get a big improvement?

1.5. Use the CDN

Last but not least,

no matter how well optimized a site is, it still needs a CDN behind it to reach its full potential.

If we run $ traceroute targetIp on Linux or > tracert targetIp on Windows, we can see every router a packet passes through between the user and the target machine. It goes without saying that the farther the user is from the server, the more routers are involved and the higher the latency. One purpose of a CDN is to solve exactly this problem — and not only that, it also offloads pressure from the origin IDC.

Of course, with our personal finances (unless you are Wang Sicong) we are certainly not going to build our own CDN, but we can use the services offered by major providers such as Tencent Cloud; configuration is very simple, so please refer to their documentation.

You may notice that a site’s CDN domain name is usually different from its main domain. Take a look at the official Taobao or Tencent sites: the CDN domain used for static resources is different from the main domain. Why do that? There are two main reasons: [content adapted from: bbs.aliyun.com/simple/t116…]

① The CDN is an independent service and its cache can be configured independently. To reduce pressure on the web servers, a CDN follows the Cache-Control and Expires HTTP headers to cache the content of a request, so that subsequent requests do not go back to the origin and are therefore accelerated. With a traditional setup (the web site and the CDN sharing one domain), you have to set cache rules per file type or follow whatever headers the back end sends, and it is hard to exploit the CDN fully because dynamic requests have a high back-to-origin rate; if the line between the visitor and the origin is not slow, going through the CDN is not necessarily faster than hitting the origin directly. Large sites, in order to push performance to the limit, usually set very long cache lifetimes — Google sets a one-year cache on its JS, and Baidu caches its homepage logo for ten years. If the static assets are split onto their own domain, cache rules can be deployed conveniently for all of them without worrying about dynamic requests, and fewer rules means a more efficient CDN.

② Useless cookies are discarded, reducing bandwidth usage. As we all know, the HTTP protocol automatically attaches the cookies of the domain and its parent domain to every request, but for CSS, JS and image resources those cookies are useless and only waste the visitor’s and the server’s bandwidth. Our main site stores a large number of cookies to maintain sessions and other state, so separating the CDN domain from the main domain solves this problem.

However, a new problem arises: the CDN domain is different from the main domain, and resolving the extra domain costs additional DNS time, adding network latency. Fortunately, our great programmer predecessors thought of this long ago — the answer is DNS Prefetch.

If you look at the HTML source of a large website, you will find a link like this in the head (here taking the Taobao homepage as an example):



That is DNS Prefetch, a DNS pre-resolution technique: while loading a page, the browser pre-resolves and caches the domain names referenced in it, so that no DNS lookup is needed when the links inside the page are actually loaded, which reduces the user’s waiting time and improves the experience. The markup looks like this (with a placeholder domain): <link rel="dns-prefetch" href="//cdn.example.com">. DNS Prefetch is now supported by mainstream browsers, and most browsers optimize DNS resolution anyway; a typical DNS lookup takes 20–120ms, so reducing the number and duration of lookups is a worthwhile optimization. Here is the DNS Prefetch support table from the Can I Use website:



So go ahead and use it.


2. Optimize page rendering performance


2.1. Browser Rendering Process (Webkit)



You should already be roughly familiar with the browser’s HTML rendering mechanism; the basic flow is described in the figure above. When you started out, a mentor or senior colleague probably told you to reduce reflows and repaints because they hurt browser performance — but do you actually know why? Today we will introduce the deeper concepts with the help of WebKit Technology Internals, a book I recommend every front-end engineer buy, because we deal with how the browser kernel works every day.

PS: Since the kernel has come up, let me take a moment to explain the relationship between the browser’s rendering engine, its interpreters and the other components, because junior colleagues and front-end fans often ask me about this and can’t tell them apart, so I drew a diagram to illustrate it. (This part has nothing to do with the rest of the article; skip it if you are not interested.)



The browser’s interpreters live inside the rendering engine: Chrome’s Blink (its current engine), Safari’s WebKit and Firefox’s Gecko are all rendering engines. Inside the rendering engine sit the HTML interpreter (which constructs the DOM tree during rendering), the CSS interpreter (which resolves CSS rules) and the JS interpreter. However, as JS usage grew heavier and more complex, the JS interpreter gradually became an independent JS engine — like the well-known V8, which is also what Node.js uses.


2.2. DOM render layers and GPU hardware acceleration

If I told you that a page is made up of many, many layers, like lasagna, could you picture what the page actually looks like? To help your imagination, here is a layered view from Firefox’s 3D View plugin:



When the browser renders a page, it splits the render tree into a number of render layers (Layers), and the steps it takes to put them on screen are roughly these:

① The browser takes the DOM tree and splits it into separate layers based on the style

② The CPU draws (rasterizes) each layer into a bitmap

③ Upload the bitmap as texture to the GPU for rendering

④ The GPU caches all the render layers (if a layer uploaded next time has not changed, the GPU does not need to redraw it) and composites the layers into the final image we see

As we can see from the steps above, the layout is handled by the CPU, while the drawing is done by the GPU.

In fact, Chrome also provides related panels for viewing the layout of the render layers and GPU usage (so do explore Chrome’s built-in tools — you really will find that many of them are magical).

Chrome Developer Tools → More Tools →Layers

Chrome Developer Tools → More Tools → Rendering

After doing this, you should see something like this in your browser:



That is a lot at once, so let’s break it down module by module:

(1) First, the small black window at the top right of the page: as its label makes clear, it shows our GPU usage, which lets us see at a glance whether a lot of repainting is happening on the page.

(2) Layers: This is the tool used to display the DOM rendering Layers we just mentioned. The list on the left will show which Layers are present on the page and the details of those Layers.

(3) Rendering: this panel lives in the same drawer as our Console, so don’t lose track of it. The first three checkboxes are the ones we use most often; let me explain what they do (acting as a volunteer translator):

① Paint flashing: when checked, elements that are repainted on the page are highlighted

② Layer Borders: similar to the Layers panel, it highlights the boundary of each render layer on the page

③ FPS Meter: enables the small black window mentioned in (1) so we can watch our frame rate and GPU usage

You may ask what the point of going this deep into DOM render layers is — it seems to have nothing to do with performance optimization. But remember that the GPU caches all of our render layers: imagine that we take an element that keeps triggering reflows and repaints and promote it into its own render layer — then that element no longer has to be redrawn together with all the other elements, right?

So the question is, under what circumstances will the render layer be triggered? Just remember:

Video, WebGL, Canvas, CSS3 3D transforms, CSS filters, and an element whose z-index is greater than that of an adjacent element will all trigger a new layer. In practice, the most common trick is simply to add the following styles to the element:

transform: translateZ(0);
backface-visibility: hidden;

This will trigger the render layer.

We separate the elements that trigger rearrangements and redraws from the render layer, isolating them from the “static” elements and sharing more of the rendering work with the GPU. We usually refer to this measure as hardware acceleration, or GPU acceleration. I’m sure you’ve heard this expression before, and now you know exactly how it works.

2.3. Rearrangement and redraw

Now it’s time for our big show, rearranging and redrawing. Throw out the concept first:

(1) Reflow (rearrangement): changes to the layout of elements in a render layer cause the page to be laid out again — for example resizing the window, adding or removing DOM elements, or modifying CSS properties that affect an element’s box size (such as width, height or padding).

(2) Repaint (redraw): painting, that is, filling in pixels; any change to an element’s visual appearance triggers a repaint.

We’re used to using the Performance section of Chrome DevTools to measure how long it takes to rearrange and redraw a page:



① The blue part: the time consumed by HTML parsing and network communication

② The yellow part: the time taken by JavaScript statement execution

③ The purple part: the time consumed by reflow (rearrangement)

④ The green part: the time consumed by repaint (redrawing)

Both reflow and repaint block the browser. To improve page performance we need to reduce their frequency and cost, and trigger re-rendering as rarely as possible. As mentioned in 2.2, reflow is handled by the CPU while repaint is handled by the GPU, and the CPU is far less efficient at this than the GPU; moreover, a reflow always causes a repaint, whereas a repaint does not necessarily cause a reflow. So in performance work we should focus on reducing reflows.

Here’s a website that details which CSS properties trigger rearrangements or redraws in different rendering engines:

csstriggers.com (image from the official website)



2.4. Optimization strategies

After all that theory, what matters most is the practical fixes — you must be getting impatient, so get ready, here comes a wave of concrete advice:

(1) Separate CSS property reads from writes: every time JS reads an element’s style, the browser has to re-render (reflow + repaint) to give an up-to-date answer, so when manipulating element styles from JS it is best to keep reads and writes apart — do all the reads first, then all the writes — and avoid interleaving the two. The cleanest solution, and the one I recommend most, is not to manipulate element styles from JS at all.

(2) Batch changes to element styles by toggling a class or by using the element’s style.cssText property.

(3) Update DOM elements offline: when performing operations such as a series of appendChild calls, use a DocumentFragment, assemble the elements off-document, and insert them into the page in one go; or hide the element with display: none and perform the operations while it has “disappeared” (see the sketch after this list).

(4) Hide elements that are not currently needed with visibility: hidden, which reduces the pressure of repainting; show them again when necessary.

(5) Compress the depth of the DOM: avoid excessively deep child elements inside a render layer, use less DOM to build the page, and prefer pseudo-elements or box-shadow for decorative effects.

(6) Specify image dimensions before rendering: since img is an inline element, it changes its width and height once the image has loaded, which in bad cases reflows the entire page; so specify the dimensions up front, or take the image out of the document flow.

(7) For elements in the page that are likely to undergo a large number of reflows and repaints, promote them into their own render layer and let the GPU share the CPU’s load. (Use this strategy with care: it trades GPU usage for performance, and having too many layers puts unnecessary strain on the GPU; usually, hardware acceleration is reserved for animated elements.)
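
To make strategies (1) and (3) concrete, here is a minimal sketch (the element ids and list data are placeholders):

// (1) separate reads from writes: read first, then write, instead of interleaving
const box = document.getElementById('box')
const width = box.offsetWidth           // read
const height = box.offsetHeight         // read
box.style.width = (width + 10) + 'px'   // write
box.style.height = (height + 10) + 'px' // write

// (3) offline DOM update with a DocumentFragment: one insertion, one reflow
const list = document.getElementById('list')
const fragment = document.createDocumentFragment()
for (let i = 0; i < 100; i++) {
    const li = document.createElement('li')
    li.textContent = 'item ' + i
    fragment.appendChild(li)            // happens off-document, no reflow
}
list.appendChild(fragment)              // single insertion into the live DOM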


3. JS blocking performance

JavaScript has an almost monopolistic position in web development: even the simplest static page is likely to contain some JS, and without JS there is essentially no user interaction. The problem is that scripts block the parallel downloading of page resources and increase the CPU usage of the process. What’s more, now that Node.js is everywhere in front-end work, a memory leak or an accidental infinite loop in our code can bring down our own server. In an era when JS spans both the front end and the back end, its performance bottlenecks no longer merely affect user experience — they can cause far more serious problems — so JS performance optimization should not be underestimated.

If, while coding, we fail to release resources captured by closures once they are no longer needed, or fail to clear references after use (for example binding an event callback to a DOM element and later removing the element without unbinding), memory leaks occur, CPU load climbs, and the page stutters or even crashes. We can investigate this with the JavaScript Profiler panel that Chrome provides, opened the same way as the Layers panel; I won’t go into detail and will jump straight to the screenshot:



If I add a line while(true){}, then the usage will jump to an exception (93.26%).
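
To make the leak pattern described above concrete, here is a minimal sketch (the element id and the handler are placeholders):

// leaky version: the element is removed from the DOM, but it and its big data
// stay reachable through the variable and the listener's closure
let bigData = new Array(1000000).fill('x')
const btn = document.getElementById('btn')
function onClick() {
    console.log(bigData.length)         // the closure keeps bigData alive
}
btn.addEventListener('click', onClick)
btn.remove()                            // gone from the page, still referenced

// fixed version: unbind and drop the references before discarding the element
// btn.removeEventListener('click', onClick)
// bigData = null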

The browser’s garbage collector saves us from this most of the time, and even if a page does freeze, the user only has to kill the tab (or close the browser) to fix it. But remember that the same thing can happen in our Node services on the server, and in severe cases it will bring the server down and take the site with it. More often we use the JavaScript Profiler panel to stress-test our Node services, and with the node-inspector plugin we can detect the CPU usage of each function during JS execution more effectively and optimize accordingly.

(PS: so until you have reached a certain level, avoid closures on the server. For one thing, they are rarely necessary — there are better solutions; for another, they make memory leaks very easy to cause, with consequences you cannot predict.)


4. [Extension] Load balancing

The reason load balancing appears as an extension is that if you are building a personal site or a small-to-medium site you don’t really need to worry about concurrency, but if you are building a large site, load balancing is an essential part of development.


4.1. Node.js handles I/O-intensive requests

The current development process emphasises the separation of front end and back end — what software engineering calls “high cohesion, low coupling”. You can also think of it in terms of modularization: decoupling front and back end is like splitting a project into two large modules connected by an interface and developed separately. What good does that do? I’ll give the most practical benefit, which I call “asynchronous programming” for teams, because the decoupled workflow resembles our JS async queue: the traditional mode is “synchronous” — the front end has to wait until the back end has wrapped up its interfaces and it knows what data it will get before it can start, which takes a long time on a large project. With decoupling, we only need to agree on the interfaces in advance and the front and back ends can be developed simultaneously, which is both more efficient and time-saving.

As we all know, the core of Node is event-driven: it handles user requests asynchronously through its Event Loop, whereas traditional back-end services assign each user request its own process or thread. I recommend reading this blog post: mp.weixin.qq.com/s?__biz=MzA… — it explains the event-driven mechanism especially vividly and is easy to follow. What is the biggest advantage of being event-driven? Strong I/O with high concurrency, which is crucial for something like a live-streaming site — and the speed we achieved there ultimately traces back to Node.
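
A minimal sketch of what “event-driven, non-blocking I/O” means in practice (the file name is a placeholder):

const fs = require('fs')

// non-blocking: the read is handed off to the system, the event loop keeps
// serving other requests, and the callback fires when the data is ready
fs.readFile('./data.json', 'utf8', function (err, data) {
    if (err) throw err
    console.log('got ' + data.length + ' bytes')
})
console.log('this line runs before the file has been read')

// the blocking counterpart would stall every other request on this process:
// const data = fs.readFileSync('./data.json', 'utf8')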

In fact, today’s enterprise websites usually add a Node layer as a middle tier. A general architecture looks like this:



4.2. Pm2 implements Node.js “Multi-process”

We all know the pros and cons of Node; here is a link that covers them in detail and clearly took a long time to write: www.zhihu.com/question/19… . Much of it is the same old story, and those who say Node doesn’t work mostly point to the fact that Node is single-process. We have a solution: PM2 (website: pm2.keymetrics.io). It is a Node.js process manager that can start a Node.js service on every core of your machine: if your computer or server has a multi-core processor it will start multiple Node.js processes and automatically load-balance between them, distributing user requests to the less-stressed processes. It sounds like a real artifact! And its features go far beyond that — I won’t introduce them all here; just know that we need it when going to production. Installation is simple: install it globally with npm, $ npm i pm2 -g. For specific usage and related features, see the official site.
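
As a hedged sketch of how a project is usually wired up with PM2 (the app name and script path are placeholders):

// ecosystem.config.js — start with: pm2 start ecosystem.config.js
module.exports = {
    apps: [{
        name: 'my-site',            // placeholder application name
        script: './server.js',      // placeholder entry script
        instances: 'max',           // one worker per CPU core
        exec_mode: 'cluster'        // cluster mode lets pm2 load-balance requests
    }]
}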

The following is the effect of PM2 after startup:



4.3. Nginx constructs the reverse proxy

Before you start building, you need to know what a reverse proxy is. You may be unfamiliar with this term, but here’s a picture:



A proxy is what we usually call a middleman; a website’s reverse proxy is a server that sits between the user and our real servers, and its job is to distribute user requests to the servers under the least pressure. That should sound familiar — yes, I said the same thing when introducing PM2. The reverse proxy does for servers what PM2 does for processes: it implements load balancing, and now you know the difference between the two — the reverse proxy balances load across servers, while PM2 balances load across processes. If you want a thorough understanding of reverse proxying, I recommend this Zhihu thread: www.zhihu.com/question/24… . But you might think: isn’t the server an ops concern? What does it have to do with us front-enders? Indeed — in this section our contribution is little more than handing ops a configuration file:

http {
    upstream video {
        ip_hash;
        server localhost:3000;
    }
    server {
        listen 8080;
        location / {
            proxy_pass http://video;
        }
    }
}

In other words, when handing off to ops, we just need to adapt the few lines above into our own configuration file and send it over; ops will understand the rest without our saying another word.

But how do we adapt those lines? First, nginx modules fall into three main categories: handlers, filters and upstreams. The upstream module, which is responsible for receiving, processing and forwarding network data, is the one we need for a reverse proxy. Here is what the configuration above means:

4.3.1. Upstream Configuration Information

The identifier immediately after the upstream keyword is a name we choose for the project (the upstream group); its configuration goes inside the pair of curly braces.

ip_hash keyword: ensures that when a user visits again, the request is routed to the same upstream server as before.

server keyword: the address (and port) of the machine where the project actually runs — without it, how would ops know where you deployed the project?

4.3.2. Server configuration information

server is nginx’s basic configuration block; we apply the upstream we defined to a server block via proxy_pass.

listen keyword: the port the server listens on

location keyword: plays the same role as the routing we discussed in the Node layer — it maps the user’s request path to the corresponding upstream


5. Further reading

Website performance optimization and monitoring is a complex project and there is much more work to be done; what I have covered here is just the tip of the iceberg.

Having read through many books on site performance, I still most highly recommend Large-Scale Website Performance Monitoring, Analysis and Optimization by my senior, Tang; its material is relatively new and practical — at the very least I came away from it with a great deal, feeling both rewarded and sobered. I hope readers interested in performance will pick it up after reading this article.


Source: Website Performance Optimization in Practice — the story of going from 12.67s to 1.06s — Tencent IMWeb front-end team community


Wechat mini program Development -NEXT degree course registration hot, interested partners quickly click on the picture, understand the details of the course!