Site performance monitoring and optimization strategy
0. Introduction
For an Internet project, user experience is everything. In the nationwide "Internet Plus" boom, "users first" has been embraced by most companies, and in this era of rapidly growing mobile usage our pages are no longer viewed only in desktop browsers; more often, users reach them through mobile products. At the same time, more and more developers are joining Web App and Hybrid App teams, and performance has once again become a topic that programmers care deeply about. I once read a line to the effect that a website's experience determines whether users are willing to learn about its features, while its features determine whether users will put up with its experience. It is an adapted Internet catchphrase, but it captures site performance very well: for a project like a website, if a user needs more than 5s to see the page, they will close it without hesitation.
Performance optimization is an engineer's "kung fu": a perennial topic in development and a necessary step on the path from junior to senior. We have all seen plenty of standards and tricks, yet in real practice we are often at a loss, unsure what has been missed and whether there is still room to optimize further.
The industry has many established metrics for website performance, but as front-end developers we should pay most attention to the following: white screen time, first screen time, full page load time, DNS lookup time, and CPU usage. I built a website (url: jerryonlyzrj.com/resume/; the domain name filing is temporarily unavailable and should be back to normal in a few days) whose first screen time was 12.67s before any optimization and came down to 1.06s after a round of optimizations, without even configuring CDN acceleration. Along the way I stepped into plenty of pitfalls and consulted quite a few professional books, and I finally decided to write these days of effort down to help front-end enthusiasts avoid some detours.
Today we will walk through performance optimization in three parts: network transmission performance, page rendering performance, and JS blocking performance, taking readers through the practice systematically.
1. Network transmission performance optimization
Before we dive into optimizing network transmission performance, we need to understand how browsers handle user requests, so here is the essential diagram:
This is the Navigation Timing metrics chart. From it we can see that after the browser receives a user request, it goes through the following stages: redirect → fetch from cache → DNS lookup → establish TCP connection → send request → receive response → process HTML elements → finish loading elements. Take your time; we will go through the details step by step:
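To put numbers on these stages yourself, you can read the Navigation Timing API from the console. A minimal sketch, assuming you run it after the page has fully loaded (the metric groupings are my own shorthand, and performance.timing is the older but widely supported interface):

// Run in the browser console after the load event has fired
const t = performance.timing
console.log('DNS lookup:', t.domainLookupEnd - t.domainLookupStart, 'ms')
console.log('TCP connect:', t.connectEnd - t.connectStart, 'ms')
console.log('Request + response:', t.responseEnd - t.requestStart, 'ms')
console.log('White screen (approx.):', t.responseStart - t.navigationStart, 'ms')
console.log('DOM ready:', t.domContentLoadedEventEnd - t.navigationStart, 'ms')
console.log('Full page load:', t.loadEventEnd - t.navigationStart, 'ms')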
1.1. Browser caching
As we all know, before the browser sends a request to the server it checks whether it already has an identical copy locally; if so, it pulls the local cache directly. This is similar to Redis and Memcached on the back end: both act as an intermediate cache layer.
Because the diagrams floating around the web are too generic, and the many caching articles I have read rarely lay out systematically which status codes appear and when the cache lives in memory versus on disk, I drew a flowchart of the browser caching mechanism myself and will use it to explain the mechanism further.
Here we can use the Network panel in Chrome DevTools to view network transport information:
(Note that the Disable cache checkbox at the top of the Network panel must be unchecked when debugging the cache, otherwise the browser will never pull data from the cache.)
The browser cache lives in memory by default, but we know the in-memory cache is cleared when the process ends or the browser closes, while the cache on disk can be kept long term. Most of the time you will see two different states in the Network panel: from memory cache and from disk cache, the former served from memory and the latter from disk. What controls where the cache is stored is the Etag field we set on the server: when the browser receives the response, it checks the response headers, and if an Etag field is present it writes the cache to disk.
Pulling from the cache can yield two different status codes, 200 and 304, depending on whether the browser made a revalidation request to the server. The 304 status code is returned only when a validation request is sent to the server and the server confirms the cache has not been updated.
Here I use nginx as an example to talk about how to configure caching:
First, open the nginx configuration file:
$ vim nginxPath/conf/nginx.conf
Insert the following two directives into the configuration file:
etag on;       # enable ETag validation
expires 7d;    # set the cache expiration time to 7 days
Open the site and inspect the requested resources in the Network panel of Chrome DevTools. If you see the Etag and Expires fields in the response headers, your cache configuration has taken effect.
【Special attention!!】 Bear in mind that when caching is configured, a request that hits the (non-validated) cache is served straight from the local copy without any communication with the server. That means if we update a file on the server, the browser has no way of knowing and cannot invalidate its stale cache. Therefore, at build time we need to add MD5 hash suffixes to our static resource filenames, so that resource updates do not leave the front-end and back-end files out of sync.
1.2. Resource packaging and compression
The browser caching we just covered only helps from the second visit onwards; for good performance on a user's very first visit, the resources themselves must be optimized. We usually boil network performance optimization down to three measures: reduce the number of requests, reduce the size of the requested resources, and increase the transmission rate. Now let's break them down one by one:
With front-end engineering in mind, we usually rely on a build tool to package and compile files for release automatically. I recommend webpack here; I normally use Gulp and Grunt for building Node projects.
When configuring WebPack to go live, we should pay special attention to the following points:
① JS compression (this should be familiar, so I won't say much):
new webpack.optimize.UglifyJsPlugin()
② HTML compression:
new HtmlWebpackPlugin({                        // new an instance of this plugin and pass in the relevant arguments
  template: __dirname + '/views/index.html',
  filename: '../index.html',
  minify: {
    removeComments: true,
    collapseWhitespace: true,
    removeRedundantAttributes: true,
    useShortDoctype: true,
    removeEmptyAttributes: true,
    removeStyleLinkTypeAttributes: true,
    keepClosingSlash: true,
    minifyJS: true,
    minifyCSS: true,
    minifyURLs: true,
  },
  chunksSortMode: 'dependency'
})
When we use html-webpack-plugin to inject the JS automatically and bundle the HTML file together with the CSS, we rarely add configuration items to it, so I give an example here that you can copy directly.
PS: a small trick here: when writing the src or href attributes of HTML elements, we can omit the protocol part, which also saves a few bytes.
③ Extraction of public resources:
new webpack.optimize.CommonsChunkPlugin({
  name: 'vendor',
  filename: 'scripts/common/vendor-[hash:5].js'
})
PS: this is webpack 3 syntax; webpack 4 replaces CommonsChunkPlugin with optimization.splitChunks.
④ Extract the CSS and compress it:
When using webpack we usually import CSS files as modules (webpack's philosophy is that everything is a module), but for release we need to extract and compress these CSS files. This seemingly complicated process only takes a few lines of configuration:
(This requires the extract-text-webpack-plugin, so install it first.)
const ExtractTextPlugin = require('extract-text-webpack-plugin')

module: {
  rules: [..., {
    test: /\.css$/,
    use: ExtractTextPlugin.extract({
      fallback: 'style-loader',
      use: {
        loader: 'css-loader',
        options: { minimize: true }
      }
    })
  }]
}
⑤ Use webpack 3's new ModuleConcatenationPlugin (scope hoisting):
new webpack.optimize.ModuleConcatenationPlugin()
If you follow the five points above when configuring webpack for release, you can basically shrink your file sizes to the limit. If I have missed anything, I hope you will add to the list.
Here is a copy of my production webpack configuration for reference:
//webpack.pro.js
const webpack = require('webpack')
const HtmlWebpackPlugin = require('html-webpack-plugin')
const ExtractTextPlugin = require('extract-text-webpack-plugin')
const CleanWebpackPlugin = require('clean-webpack-plugin')
const CopyWebpackPlugin = require('copy-webpack-plugin')

module.exports = {
  entry: __dirname + '/public/scripts/index.js',
  output: {
    path: __dirname + '/build/static',
    filename: 'scripts/[name]-[hash:5].js' // output file name with hash
  },
  resolve: {
    extensions: ['.jsx', '.js']
  },
  module: {
    rules: [{
      test: /(\.jsx|\.js)$/,
      use: { loader: 'babel-loader' },
      exclude: /node_modules/ // do not compile node_modules
    }, {
      test: /\.css$/,
      use: ExtractTextPlugin.extract({
        fallback: 'style-loader',
        use: {
          loader: 'css-loader',
          options: { minimize: true }
        }
      })
    }]
  },
  plugins: [
    new HtmlWebpackPlugin({
      template: __dirname + '/views/index.html',
      filename: '../index.html',
      minify: {
        removeComments: true,
        collapseWhitespace: true,
        removeRedundantAttributes: true,
        useShortDoctype: true,
        removeEmptyAttributes: true,
        removeStyleLinkTypeAttributes: true,
        keepClosingSlash: true,
        minifyJS: true,
        minifyCSS: true,
        minifyURLs: true,
      },
      chunksSortMode: 'dependency'
    }),
    new ExtractTextPlugin('styles/style-[hash:5].css'),
    new CleanWebpackPlugin('build/*', {
      root: __dirname,
      verbose: true,
      dry: false
    }),
    new webpack.optimize.UglifyJsPlugin(),
    new CopyWebpackPlugin([{
      from: __dirname + '/public/images',
      to: __dirname + '/build/static/images'
    }, {
      from: __dirname + '/public/scripts/vector.js',
      to: __dirname + '/build/static/scripts/vector.js'
    }]),
    new webpack.optimize.ModuleConcatenationPlugin(),
    new webpack.optimize.CommonsChunkPlugin({
      name: 'vendor',
      filename: 'scripts/common/vendor-[hash:5].js'
    })
  ]
}
Finally, we should also enable gzip compression on the server, which shrinks our text-based files to roughly a quarter of their original size; the effect is immediate. Switch to our nginx configuration file again and add the following two configuration items:
gzip on;
gzip_types text/plain application/javascript application/x-javascript text/css application/xml text/javascript application/x-httpd-php application/vnd.ms-fontobject font/ttf font/opentype font/x-woff image/svg+xml;
If you see the Content-Encoding: gzip field in the response headers of the site's requests, gzip compression has been configured successfully:
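If your static resources are served by the Node layer instead of nginx, the same effect can be achieved with middleware. A minimal sketch, assuming Express and the compression package are installed (neither is part of the original setup described above):

const express = require('express')
const compression = require('compression') // gzip/deflate middleware

const app = express()
app.use(compression()) // compresses text-like responses by default; already-compressed image formats are skipped by its filter
app.use(express.static(__dirname + '/build'))
app.listen(3000)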
【Special attention!!】 Do not gzip image files! Do not gzip image files! Do not gzip image files! It is almost always counterproductive: weighing server CPU usage against the compression ratio gained, compressing images consumes a lot of back-end resources for an insignificant size reduction, so it is truly "more harm than good". Please remove the image types from gzip_types. We will look at image handling in more detail next.
1.3. Image resource optimization
The resource packaging and compression we just covered stays at the code level, but in real development the resources that really dominate network transmission are not those files but images. If you optimize images, you will see an obvious effect immediately.
1.3.1. Do not scale images in HTML
Many developers (myself included, once) have the illusion that loading a 400 * 400 image into a 200 * 200 container is convenient and even makes the image look sharper. Users do not perceive it as any clearer; all it does is slow the page down and waste bandwidth. You may not realize that the difference in transfer time between a 200KB image and a 2M image can be 200ms versus 12s (personally experienced, ┬_┬). So only keep genuinely large images on the server when you actually need them at that size, and otherwise serve images at the size they will be displayed.
1.3.2. Using CSS Sprite
Sprites are a concept you hear about constantly in development; essentially they are a way of reducing the number of requests by merging many small images into one. The surprising part is that after merging, the total size is often smaller than the sum of the individual images (try it yourself). Here is a tool that generates sprite sheets automatically: www.toptal.com/developers/… (image from the official website home page)
As long as you add the relevant resource files, it will automatically generate Sprite images and corresponding CSS styles for you. All you need to do is download and copy.
1.3.3. Using icon fonts (iconfont)
Whether compressed or merged into a sprite, an image is still an image, and it still consumes a lot of network transmission resources. With the advent of icon fonts, however, front-end developers saw another wonderful world.
My favorite is Alibaba's iconfont vector icon library (website: www.iconfont.cn/), which has a huge number of vector resources. You just add them to a cart, much like shopping on Taobao, and take them home; after organizing your resources it can even generate CDN links automatically. It is close to a perfect one-stop service. (Image from the official website home page)
Vector icons can do most of what images do, while amounting to no more than a few characters plus CSS styles in the HTML; they are not even in the same order of magnitude of network traffic as image requests. If your project has lots of small icons, use vector icons.
1.3.4. Using WebP
WebP is an image format developed by Google to speed up image loading. A WebP image is only about two-thirds the size of the equivalent JPEG, saving a lot of server bandwidth and storage space. Well-known sites such as Facebook and eBay are already testing and using the WebP format.
We can use the Linux command-line tool provided on the official website to encode the project's images as WebP, or use an online service; here I recommend Upyun (www.upyun.com/webp). For the actual release, though, we still write shell scripts and use the command-line tool for batch encoding; the online service is just convenient and quick during the testing phase. (Image from the Upyun official website)
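On the client side you will also want a fallback for browsers without WebP support. A commonly used detection trick, sketched below (not part of the original article's toolchain; the element id and file paths are illustrative), asks a canvas whether it can encode WebP:

// Returns true if the browser can encode (and therefore decode) WebP
function supportsWebP() {
  const canvas = document.createElement('canvas')
  canvas.width = canvas.height = 1
  return canvas.toDataURL('image/webp').indexOf('data:image/webp') === 0
}

// Pick the right extension for an image URL
const ext = supportsWebP() ? '.webp' : '.jpg'
document.querySelector('#banner').src = '/images/banner' + ext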
1.4. Network transmission performance detection tool — Page Speed
In addition to the Network panel, Chrome also has a plugin for monitoring network performance called Page Speed, which is featured on the cover image of this article (because I think it is excellent). To install it, follow this path in Chrome: Chrome menu → More tools → Extensions → Chrome Web Store → search for Page Speed and install it.
(PS: the Chrome Web Store requires a way around the Great Firewall; how to do that is beyond the scope of this article.)
This is how Page Speed works:
We just open the page we want to test and click the Start Analyzing button in Page Speed, and it automatically measures the network transmission performance for us. This is the test result for my website:
The best thing about Page Speed is that it gives complete, actionable suggestions for the performance bottlenecks it finds, and you can optimize accordingly. The Page Speed Score indicates your performance score, and 100/100 means there is nothing left to improve.
After optimizing, use the Network panel of Chrome DevTools to measure the white screen time and first screen time of your pages again. Is the improvement obvious?
1.5. Use the CDN
Last but not least,
However well the application-level optimizations are done, performance only reaches its limit with the support of a CDN.
If we run $ traceroute targetIp on Linux or > tracert targetIp on Windows, we can see every router hop between the user and the target machine. It goes without saying that the farther the user is from the server and the more routers in between, the higher the latency. One purpose of a CDN is to solve exactly this problem, and a CDN also offloads pressure from the IDC.
Of course, with an individual's finances (unless you are Wang Sicong) building your own CDN is out of the question, but we can use the services offered by the major providers, such as Tencent Cloud; the configuration is also very simple, so I will leave it to you to explore.
In fact, the CDN domain is usually different from the site's main domain. Take a look at the official sites of Taobao or Tencent and check the CDN domains they use for static resources: they differ from the main domain. Why do that? There are two main reasons: [content adapted from: bbs.aliyun.com/simple/t116…]
① It keeps the CDN service independent, so caching can be configured independently. To reduce load on the web servers, a CDN caches response content according to the Cache-Control and Expires HTTP headers, so subsequent requests do not go back to the origin and are served faster. With a traditional setup (web and CDN sharing one domain), you have to set cache rules per file type or follow the back end's headers, and it is hard to get the full benefit of the CDN because dynamic requests have a high chance of going back to the origin; if the line between the visitor and the origin site is not slow, a request routed through the CDN is not necessarily faster than one sent straight to the origin. To push performance to the limit, large websites usually set very long cache lifetimes, for example a year for Google's JS and ten years for the Baidu home page logo. If static assets are served from their own domain, cache rules can be deployed conveniently for all static content without worrying about dynamic requests, and fewer rules make the CDN more efficient.
② It discards useless cookies and reduces bandwidth usage. We all know HTTP automatically attaches the cookies of the domain and its parent domain to every request, but for CSS, JS, and image resources these cookies are useless; they only waste the visitor's upstream bandwidth and the server's inbound bandwidth. To maintain sessions or other state, the main domain stores a lot of cookies, so separating the CDN domain from the main domain solves this problem.
However, a new problem arises: the CDN domain differs from the main domain, and DNS resolution of the CDN domain takes extra time, increasing network latency. This was no obstacle for our great programmer predecessors, though: enter DNS Prefetch.
If you look at the HTML source of large websites, you will find link tags with rel="dns-prefetch" in the head (the Taobao home page, for example).
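A generic example of such tags (the domain below is a placeholder, not Taobao's actual CDN host):

<meta http-equiv="x-dns-prefetch-control" content="on">
<link rel="dns-prefetch" href="//cdn.example.com">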
This is DNS Prefetch, a DNS pre-resolution technique: while loading a page, the browser pre-resolves and caches the domain names referenced by the page, so no DNS lookup is needed when the browser later loads the page's links, reducing the user's waiting time and improving the experience. DNS Prefetch is supported by all mainstream browsers, and most browsers already optimize DNS resolution; a typical DNS lookup takes 20-120ms, so reducing the number and duration of lookups is a worthwhile optimization. Here is the DNS Prefetch support table from Can I Use:
So feel free to use it.
2. Page rendering performance optimization
2.1. Browser Rendering Process (Webkit)
You are probably already familiar with the browser's HTML rendering pipeline; the basic flow is described in the diagram above. When you were starting out, a mentor or senior colleague may have told you to reduce reflow and repaint because they hurt browser performance, but probably without explaining how it works underneath. Today we will introduce some of the deeper concepts from WebKit Technology Insider (a book I strongly recommend buying; as a front-end engineer you should at least know how the browser kernel you face every day works).
PS: since the kernel comes up here, let me also explain the relationships among the browser's rendering engine, interpreters, and other components, because junior colleagues and front-end enthusiasts often ask me about this and cannot tell them apart. I drew a diagram to illustrate it. (This part is not essential to the article; skip it if you are not interested.)
The browser's interpreters live inside the rendering engine: Chrome uses the WebKit engine (now Blink), Safari uses WebKit, and Firefox uses Gecko. Inside the rendering engine there is an HTML interpreter (which builds the DOM tree during rendering), a CSS interpreter (which resolves CSS rules), and originally a JS interpreter as well. However, as JS grew more important and its workload more complex, the JS interpreter gradually became an independent JS engine, like the well-known V8 engine that Node.js also uses.
2.2. DOM rendering layers and GPU hardware acceleration
If I told you that a page is made up of many, many layers, like a lasagna, could you picture what the page actually looks like? To help your imagination, here is a screenshot of the Layers view from Firefox's 3D View plugin:
A page is composed of multiple DOM elements organized into rendering layers (Layers). After the render tree is built, the page goes through the following steps before it finally appears in front of us:
① The browser takes the DOM tree and, based on styles, splits it into independent rendering layers
② The CPU draws each layer into a bitmap
③ The bitmaps are uploaded to the GPU (graphics card) as textures
④ The GPU caches all the rendering layers (if a layer uploaded next time has not changed, the GPU does not need to redraw it) and composites the layers into the final image we see
As we can see from the steps above, the layout is handled by the CPU and the drawing is done by the GPU.
Chrome also provides panels for inspecting the rendering-layer layout and GPU usage (so do try out those obscure panels in Chrome DevTools; you will find plenty of gems):
Chrome DevTools menu → More tools → Layers
Chrome DevTools menu → More tools → Rendering
After doing this, you should see something like this in your browser:
That is a lot at once, so let's go through it module by module:
(I) First, the small black window at the top right of the page: as its prompt says, it shows the GPU usage, so we can clearly see whether the page is doing a large amount of repainting.
(II) Next, the Layers panel: this is the tool for inspecting the DOM rendering layers we just mentioned; the list on the left shows which layers exist on the page and their details.
(III) Finally, the Rendering panel: it sits in the same drawer as the console, so don't lose it. The first three checkboxes are the ones we use most; let me explain what they do (acting as a free translator):
① Paint flashing: when checked, elements being repainted on the page are highlighted
② Layer Borders: similar to the Layers panel, it highlights the borders of each rendering layer on the page
③ FPS meter: opens the small black window mentioned in (I) to observe the frame rate and GPU usage
You might ask what the point of introducing DOM rendering layers is if they seem to have nothing to do with performance optimization. Remember that the GPU caches our rendering layers? Imagine extracting an element that reflows and repaints heavily into its own rendering layer: its changes would no longer force all the other elements to be repainted along with it.
Which raises the question: under what circumstances is a new rendering layer created? Just remember:
Video elements, WebGL, Canvas, CSS3 3D transforms, CSS filters, and elements whose z-index is greater than a neighboring node's will all trigger a new layer. The most common approach in practice, though, is simply to add the following styles to an element:
transform: translateZ(0);
backface-visibility: hidden;
This will trigger the render layer.
We call this hardware acceleration, or GPU acceleration: separating elements that reflow and repaint frequently from the "static" elements and letting the GPU take on more of the rendering work. You have surely heard the term before; now you know exactly how it works.
2.3. Rearrangement and redrawing
Now for the main act: reflow and repaint. First, the definitions:
① Reflow (rearrangement): changes to the layout of elements within a rendering layer cause the page to be re-laid-out, for example resizing the window, adding or removing DOM elements, or modifying CSS properties that affect an element's box size (such as width, height, or padding).
② Repaint (redraw): any change to an element's visual properties causes it to be repainted.
We can use the Performance panel of Chrome DevTools to measure how much time the page spends on reflow and repaint:
① Blue: time spent on HTML parsing and network communication
② Yellow: time spent executing JavaScript
③ Purple: time spent on reflow (layout)
④ Green: time spent on repaint (painting)
Both reflow and repaint block the browser, so to improve page performance we must reduce their frequency and cost and trigger re-rendering as rarely as possible. As mentioned in 2.2, reflow is handled by the CPU while repaint is handled by the GPU; the CPU is far less efficient at this than the GPU, and a reflow always causes a repaint while a repaint does not necessarily cause a reflow. So in performance optimization the priority is to reduce reflows.
Here’s a site that lists in detail which CSS properties trigger rearrangements or redraws in different rendering engines:
csstriggers.com/
2.4. Optimization strategy
Enough theory; what everyone is really waiting for is the practical solutions, so brace yourself for a wave of them:
① Separate CSS reads from writes: every time JS reads a layout-related style of an element, the browser may be forced to re-render (reflow + repaint) to return an up-to-date value, so when manipulating element styles with JS it is best to batch all reads first and all writes afterwards, never interleaving them (see the sketch after this list). The cleanest solution of all, which I recommend, is not to manipulate element styles from JS in the first place.
② Batch style changes by switching a class or by using the element's style.cssText property.
③ Update DOM elements offline: when performing a batch of DOM operations (appendChild and the like), build the nodes in a DocumentFragment and insert them into the page only once they are fully "assembled", or hide the element with display: none, operate on it while it is "gone", and then show it again (see the sketch after this list).
④ Make unused elements invisible with visibility: hidden, which reduces repaint pressure, and show them again when needed.
⑤ Compress DOM depth: a rendering layer should not have excessively deep child elements; build the page's styles with fewer DOM nodes, making more use of pseudo-elements or box-shadow instead.
⑥ Specify image dimensions before rendering: since img is an inline element, its width and height change once the image has loaded, which in severe cases reflows the whole page, so specify the size in advance or take the image out of the document flow.
⑦ For elements that reflow and repaint heavily, promote them to their own rendering layer and let the GPU share the CPU's load. (Use this strategy with care: you are trading GPU usage for a hopefully predictable performance gain, and too many rendering layers put unnecessary strain on the GPU; usually we reserve hardware acceleration for animated elements.)
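A minimal sketch of strategies ① and ③ above (the class name and element id are illustrative):

// Bad: interleaving writes and reads forces a layout flush on every iteration
// for (const el of items) { el.style.width = el.offsetWidth + 10 + 'px' }

// Better: read everything first, then write everything
const items = document.querySelectorAll('.item')
const widths = Array.prototype.map.call(items, el => el.offsetWidth) // reads
items.forEach((el, i) => { el.style.width = (widths[i] + 10) + 'px' }) // writes

// Offline DOM update: assemble new nodes in a DocumentFragment, insert once
const fragment = document.createDocumentFragment()
for (let i = 0; i < 100; i++) {
  const li = document.createElement('li')
  li.textContent = 'item ' + i
  fragment.appendChild(li)
}
document.querySelector('#list').appendChild(fragment) // a single reflow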
3. JS blocking performance
JavaScript holds an almost monopoly position in website development; even a simple static page contains some JS, and without JS a website can basically say goodbye to user interaction. The problem with scripts, however, is that they block parallel downloading of the page and drive up the process's CPU usage. What's more, now that Node.js is ubiquitous in front-end development, careless code can leak memory or accidentally spin in an infinite loop and bring down our servers. In an era when JS spans both the front end and the back end, its performance bottlenecks no longer merely hurt the user experience; they can cause far more serious problems, so JS performance work must not be underestimated.
While programming, if resources captured by a closure are never released when they are no longer needed, or references are not cleared after use (for example, an event callback bound to a DOM element that has since been removed), a memory leak occurs, CPU and memory load rise, and the page janks or crashes. We can investigate this with the JavaScript Profiler panel that Chrome provides, which is enabled the same way as the Layers panel, so I won't repeat the steps.
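A minimal sketch of the kind of leak described above (all names are illustrative): a DOM node removed from the page but kept alive because a long-lived callback still closes over it:

const handlers = [] // long-lived structure, e.g. a global event bus

function bind() {
  let node = document.createElement('div')
  node.innerHTML = new Array(10000).join('<span>leak</span>')
  document.body.appendChild(node)

  // The callback closes over `node`, so removing the element from the DOM
  // is not enough: the detached node stays in memory for as long as the
  // handler is kept around.
  handlers.push(() => console.log(node.innerHTML.length))

  document.body.removeChild(node)
  // Fix: remove the handler when it is no longer needed, or set node = null
  // inside the callback once its data has been consumed.
}

bind()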
If I add a while(true){} line to the code, CPU usage immediately spikes to an abnormal level (93.26%).
The browser's garbage collection is powerful enough to avoid disaster most of the time, and even when a page does freeze, the user only has to kill the relevant process (or close the browser) to recover. But remember that the same things happen on our servers, that is, in our Node layer, and in severe cases they take the server down and the website with it. Much of the time we use the JavaScript Profiler to stress-test our Node services; with the node-inspector plugin we can detect the CPU usage of each function during JS execution more effectively and optimize accordingly.
(PS: so until you reach a certain level, avoid closures on the server side. On the one hand they rarely buy you much, since there are usually better solutions, and on the other hand it is genuinely easy to leak memory with them, with consequences you won't see coming.)
4. Load balancing
Load balancing is included as an extension because if you are building a personal site or a small-to-medium site you don't really need to worry about concurrency, but if you are building a large site, load balancing is an indispensable part of the development process.
4.1. Node.js handles I/O-intensive requests
Today's development process emphasizes separating the front end from the back end, the "high cohesion, low coupling" of software engineering; you can also think of it in terms of modularity. Decoupling the front and back ends is like splitting a project into two large modules connected through an interface and developed separately. What is the benefit? Let me give the most practical one: "asynchronous programming". That is my own name for it, because decoupled front/back-end development resembles JS's asynchronous queue. The traditional model is "synchronous": the front end has to wait for the back end to finish the interfaces and learn what data is available before it can start. After decoupling, the two sides only need to agree on the interface up front and can then develop simultaneously, which is both efficient and time-saving.
As we all know, the core of Node is event-driven: it handles user requests asynchronously through the event loop, whereas a traditional back-end service allocates a separate thread or process to each user request. What is the biggest advantage of being event-driven? Under high-concurrency I/O it does not block, which is crucial for live-streaming sites, and there are successful precedents whose powerful high-concurrency I/O essentially comes down to Node.
In fact, enterprise websites today typically add a Node layer as a middle tier. The outline of such a site is shown below:
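To make the idea concrete, here is a toy sketch of such a middle layer (the back-end host, port, and paths are placeholders): a single Node process that accepts browser requests and forwards them to the back-end API without ever blocking on I/O:

const http = require('http')

http.createServer((req, res) => {
  // Forward the request to the real back-end service. Nothing blocks here:
  // while we wait for the upstream response, the event loop keeps
  // accepting other connections.
  const upstream = http.request({
    hostname: 'api.internal.example.com', // placeholder back-end host
    port: 8080,
    path: req.url,
    method: req.method,
    headers: req.headers
  }, backendRes => {
    res.writeHead(backendRes.statusCode, backendRes.headers)
    backendRes.pipe(res)
  })
  upstream.on('error', () => {
    res.statusCode = 502
    res.end('Bad gateway')
  })
  req.pipe(upstream)
}).listen(3000)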
4.2. Pm2 implementation of Node.js “Multi-process”
We all know Node's pros and cons; here is a link to a long and thorough discussion: www.zhihu.com/question/19… . Many of the criticisms are the same old story, and to those who say Node's weakness is being single-process: we have a solution, pm2. Its website is pm2.keymetrics.io/. It is a Node.js process manager that can start a Node.js service on every core of your machine. In other words, if your computer or server has a multi-core processor, pm2 can start multiple Node.js service processes and automatically load-balance between them, dispatching user requests to the less loaded processes. It sounds like a real artifact, and its features go well beyond that; I won't introduce them all here, since for now we only need to know what we use it for. Installation is simple: install it globally with npm, $ npm i pm2 -g. For specific usage and features, see the official site.
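Besides the command line, pm2 can also read a process file. A minimal ecosystem.config.js sketch (the app name and script path are placeholders) that starts one worker per CPU core in cluster mode:

// ecosystem.config.js -- start it with: pm2 start ecosystem.config.js
module.exports = {
  apps: [{
    name: 'my-site',             // placeholder app name
    script: './build/server.js', // placeholder entry file
    instances: 'max',            // one worker per CPU core
    exec_mode: 'cluster'         // pm2 load-balances requests across workers
  }]
}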
This is what pm2 looks like after startup:
4.3. Nginx sets up reverse proxy
Before starting the setup, we need to know what a reverse proxy is. If the term is unfamiliar, start with this picture:
A proxy is what we commonly call an intermediary. A website's reverse proxy is a server that sits between the user and our real servers; its job is to distribute user requests to the less loaded servers, for example by round-robin polling. That should sound familiar: it is what I said when introducing pm2. The reverse proxy plays the same role as pm2, load balancing, and now you also know the difference between the two: the reverse proxy balances load across servers, while pm2 balances load across processes. If you want a thorough understanding of reverse proxies, I recommend this Zhihu thread: www.zhihu.com/question/24… . You might think servers are the ops team's business, so what does this have to do with the front end? True; in this part of the work we only need to hand ops a configuration snippet.
http {
  upstream video {
    ip_hash;
    server localhost:3000;
  }
  server {
    listen 8080;
    location / {
      proxy_pass http://video;
    }
  }
}
That is to say, when liaising with ops we only need to adjust the few lines above to match our configuration and hand them over; the ops folks will understand the rest without another word.
But what do these few lines of configuration mean? First, remember that nginx modules fall into three main categories: handler, filter, and upstream. The upstream module is responsible for receiving, processing, and forwarding network data, and it is the one we need for a reverse proxy. Next, let's look at what the contents of the configuration mean:
4.3.1. Upstream Configuration Information
The identifier immediately after the upstream keyword is our custom project name, and a pair of curly braces encloses its configuration.
ip_hash keyword: routes a returning user to the same server they were connected to before
server keyword: the address of our real server. This part we must fill in ourselves; otherwise how would ops know which server you deployed the project on, or that you wrapped it in a Node layer listening on port 3000?
4.3.2. Server Configuration Information
server is nginx's basic configuration block; through it we apply the upstream we defined to the actual service.
Listen: the port on which the server listens
The location keyword plays the same role as the routes we discussed in the Node layer: it assigns the user's requests to the corresponding upstream.
5. Read more
Website performance monitoring and optimization is a complex project with plenty of follow-up work; what I have covered here is only the tip of the iceberg. Beyond being familiar with the practices above, it also takes hands-on experience.
Having gone through many books on site performance, the one I still prefer is the veteran author Tang's Large-Scale Website Performance Monitoring, Analysis and Optimization; its content is relatively current and practical. At the very least, I came away from it with a lot and a clearer head, and I hope readers interested in performance will pick it up after finishing my article.
Site performance optimization in practice — from 12.67s to 1.06s — Tencent Web front-end IMWeb team community