This article is an update of an earlier article with the same title: all webpack 3 content has been migrated to webpack 4, and the automation ideas I have picked up recently at work have been added to further round out the content.
0. Introduction
For an Internet product, the most important thing is user experience. In the nationwide "Internet plus" boom, "users first" has been embraced by most companies, and in the era of rapid mobile growth our pages are no longer viewed only in desktop browsers; more often, users reach them through mobile products. With more and more developers joining Web app and Hybrid app teams, performance has once again become a focus for programmers. I once read a line to the effect that a website's experience determines whether users are willing to learn about its features, while its features determine whether users will veto its experience outright. It is an adapted Internet catchphrase, but it describes site performance very well: for a project like a website, if a page takes more than 5s to appear, the user will close it without hesitation. Performance optimization is an engineer's "inner kung fu", a perennial topic in development and a step on the path from junior to senior. We have all seen plenty of standards and tricks, but in real practice we are often at a loss: we don't know what is left to do, or whether there is still room for further optimization.
The industry has many established metrics for website performance, but for front-end developers the ones that matter most are: white screen time, first screen time, full page load time, DNS lookup time, and CPU usage. For a site I built (jerryonlyzrj.com/resume/, the domain record has had issues recently and should be back to normal in a few days), the first screen time was 12.67s before any optimization and eventually dropped to 1.06s after various optimizations, without even configuring CDN acceleration. Along the way I stepped into plenty of pits and leafed through quite a few professional books, and I finally decided to write these days of effort down to help front-end enthusiasts avoid the same detours. Updates to this article may not appear on the forum in real time, so feel free to follow my GitHub, where the latest version lives in the corresponding project; let us ride the sea of code together: github.com/jerryOnlyZR… .
Today, we will gradually introduce the three aspects of performance optimization, including network transmission performance, page rendering performance and JS blocking performance, systematically taking readers to experience the practical process of performance optimization.
1. Network transmission performance optimization
Before we dive into the work of optimizing network transport performance, we need to understand how browsers handle user requests, so here’s the magic map:
This is the Navigation Timing monitoring chart. It shows that after the browser receives a user request it goes through the following stages: redirect → check cache → DNS lookup → establish TCP connection → send request → receive response → process HTML elements → finish loading elements. Take your time, we will go through the details step by step:
1.1. Browser caching
As we all know, before making a request to the server, the browser will first check whether there is the same file in the local cache. If there is, the browser will directly pull the local cache. This is similar to Redis and Memcache in the background, which both play the role of intermediate buffer.
Because the diagrams floating around the web are too generic, and the many caching articles I have read rarely sort out systematically which status code you get and when the cache lives in memory versus on disk, I drew my own flowchart of the browser caching mechanism and will use it to explain things further.
Here we can use the Network panel in Chrome DevTools to view network transport information:
(Note that the Disable cache check at the top of the Network panel needs to be removed when we debug the cache, otherwise the browser will never pull data from the cache.)
By default the browser caches in memory, but we know the in-memory cache is cleared when the process ends or the browser closes, whereas the cache on disk can be kept long term. Most of the time the Network panel shows two different states: from memory cache and from disk cache. The former is served from memory, the latter from disk. What controls where the cache is stored is the ETag field we set on the server: when the browser receives the response, it checks the response headers and writes the cache to disk if an ETag field is present.
The pull cache has two different status codes, 200 and 304, depending on whether the browser has made an authentication request to the server. The 304 status code is returned only if a validation request is made to the server to confirm that the cache has not been updated.
Here I use nginx as an example to talk about how to configure caching:
First, open the nginx configuration file:
$ vim nginxPath/conf/nginx.conf
Insert the following two items in the configuration document:
etag on;       # enable ETag validation
expires 7d;    # set the cache expiration time to 7 days
Open our website and look at our request resources in the Network panel of Chrome DevTools. If you see the Etag and Expires fields in the response header, your cache configuration is successful.
【Special attention!!】 Bear in mind that once caching is configured, if a user request hits the cache the browser pulls the local copy directly and does not communicate with the server at all. In other words, if we update a file on the server, the browser has no way of knowing, and it cannot invalidate its own cache. Therefore, during the build phase we need to append MD5 hash suffixes to our static resource filenames, so that every resource update produces a new URL and the front-end and back-end files never get out of sync.
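For reference, here is a minimal sketch of how that hash suffix is usually produced in a webpack 4 build (webpack itself is introduced in the next section); the entry path and output directory are placeholders rather than the article's actual project layout:

// webpack.config.js (sketch): fingerprinted filenames for cache busting
const path = require('path')

module.exports = {
  mode: 'production',
  entry: './src/index.js', // placeholder entry
  output: {
    path: path.resolve(__dirname, 'dist'),
    // the hash changes only when the file content changes,
    // so updated files get new URLs and stale caches are bypassed
    filename: 'js/[name].[contenthash:8].js', // webpack 4.3+; use [chunkhash] on older 4.x
    chunkFilename: 'js/[name].[contenthash:8].js'
  }
}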
1.2. Resource packaging and compression
The browser caching we’ve done before only works the second time a user visits our page, and resources must be optimized to achieve good performance the first time a user opens the page. We often boil down network performance optimization measures into three aspects: reducing the number of requests, reducing the volume of requested resources, and improving the network transmission rate. Now, let’s break it down one by one:
With front-end engineering in mind, we usually rely on a bundler to package and compile release files automatically. Here I recommend webpack; I personally use gulp or grunt when building Node projects.
When configuring WebPack to go live, we should pay special attention to the following points:
① JS compression: (this should be familiar, so not much introduction)
optimization: {
  minimizer: [
    new UglifyJsPlugin({
      cache: true,
      parallel: true,
      sourceMap: true // set to true if you want JS source maps
    }),
    // ...other minimizer plugins
  ]
}
(2) HTML compression:
new HtmlWebpackPlugin({
  // new an instance of the plugin and pass in the relevant parameters
  template: __dirname + '/views/index.html',
  filename: '../index.html',
  minify: {
    removeComments: true,
    collapseWhitespace: true,
    removeRedundantAttributes: true,
    useShortDoctype: true,
    removeEmptyAttributes: true,
    removeStyleLinkTypeAttributes: true,
    keepClosingSlash: true,
    minifyJS: true,
    minifyCSS: true,
    minifyURLs: true
  },
  chunksSortMode: 'dependency'
})
When we use html-webpack-plugin to package HTML files with JS and CSS automatically injected, we rarely add configuration options to it, so here is an example you can copy directly. In webpack 5, the functionality of html-webpack-plugin is said to be integrated into webpack itself, much like what happened with CommonsChunkPlugin, so no extra plugin would need to be installed.
PS: here is a small trick: when writing the src or href attributes of HTML elements, we can omit the protocol part (protocol-relative URLs), which also saves a few bytes (although the real purpose is to unify the protocol used across the site).
③ Extraction of public resources:
splitChunks: {
  cacheGroups: {
    vendor: { // extract third-party libraries
      test: /node_modules/, // match packages under node_modules
      chunks: 'initial',
      name: 'common/vendor', // name of the output chunk, can be anything
      priority: 10 // higher priority so the vendor chunk is not swallowed by the custom common chunk
    },
    utils: { // extract custom shared code
      test: /\.js$/,
      chunks: 'initial',
      name: 'common/utils',
      minSize: 0 // generate the chunk even if it is tiny
    }
  }
}
④ Extract the CSS and compress it:
When using webpack, we usually import CSS files as modules (webpack's philosophy is that everything is a module), but for release we need to extract and compress these CSS files. This seemingly complicated process only takes a few lines of configuration:
(PS: we need the mini-css-extract-plugin here, so npm install it first)
const MiniCssExtractPlugin = require('mini-css-extract-plugin')

module: {
  rules: [
    // ...other rules
    {
      test: /\.css$/,
      exclude: /node_modules/,
      use: [
        _mode === 'development' ? 'style-loader' : MiniCssExtractPlugin.loader,
        {
          loader: 'css-loader',
          options: { importLoaders: 1 }
        },
        {
          loader: 'postcss-loader',
          options: { ident: 'postcss' }
        }
      ]
    }
  ]
}
I configured the postcss preprocessor here, but extracted its configuration into a separate postcss.config.js file; cssnano, used there, is an excellent CSS optimization plugin.
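For reference, a minimal sketch of what such a postcss.config.js could look like, assuming autoprefixer and cssnano are installed; the article's actual plugin list may differ:

// postcss.config.js (sketch)
module.exports = {
  plugins: [
    require('autoprefixer'),                  // add vendor prefixes
    require('cssnano')({ preset: 'default' }) // compress the extracted CSS
  ]
}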
⑤ Change webpack development environment to production environment:
During development we often turn on debugging aids in webpack (such as source maps) that we don't need in the release build, so remember to switch them off when going live:
mode: 'production', // webpack 4: enable the built-in production optimizations
devtool: false      // disable source maps for the release build
If you complete the webpack release configuration according to the points above, you can basically compress your file resources to the limit; if anything is missing, additions are welcome.
Finally, we should also enable gzip compression on the server, which shrinks text-based files to around a quarter of their original size; the effect is immediate. Switch to our nginx configuration file again and add the following two directives:
gzip on;
gzip_types text/plain application/javascript application/x-javascript text/css application/xml text/javascript application/x-httpd-php application/vnd.ms-fontobject font/ttf font/opentype font/x-woff image/svg+xml;
【Special attention!!】 Do not gzip image files! Do not gzip image files! Do not gzip image files! It is counterproductive: besides the compression ratio you also have to consider the server CPU spent compressing, and compressing images eats a lot of back-end resources for a negligible size reduction, truly "more harm than good". So please remove the image-related types from gzip_types. We will cover image handling in more detail next.
1.3. Image resource optimization
We have just covered resource packaging and compression, but that stays at the code level. In real development, what really eats network bandwidth is not those files but images; optimize the images and you will see an obvious effect immediately.
1.3.1. Do not scale images in HTML
Many developers may have this illusion (I was once like this too): for a 200✖200 container we directly use a 400✖400 image, thinking it will look sharper to the user. In normal display it won't; all the oversized image does is slow the page down and waste bandwidth. You may not realize it, but the transfer time of a 200KB image versus a 2MB image can be the difference between roughly 200ms and 12s (personally experienced, deeply scarred). So when you need large images, keep appropriately sized versions on the server and fix the image dimensions as much as possible.
1.3.2. Using CSS Sprite
You must have heard the sprite sheet concept many times; sprites are a significant way of reducing the number of requests. And oddly enough, when multiple images are combined into one, the total size is often smaller than the sum of the individual images (try it yourself). Here is an automatic sprite generation tool: www.toptal.com/developers/… (picture from the official homepage)
Once you add the relevant resource file, it will automatically generate the Sprite image and the corresponding CSS style for you.
In fact, we have a more automated option in real projects: the sprite-generating webpack plugin webpack-spritesmith. First, a brief outline of the idea of generating sprites with the plugin:
First, we place the small icons we need in one folder for easy management:
(The @2x image here is an image resource for retina 2x screen adaptation. Webpack-spritesmith has configuration items specifically for retina 2X screen adaptation, which will be covered later)
Then, we need the plugin to read all the image resource files in this folder, generate a Sprite image with the folder name as the image name to the specified location, and output the CSS file that can use these Sprite images correctly.
webpack-spritesmith can do everything we want. Here is the configuration (for details, refer to the official webpack-spritesmith documentation: www.npmjs.com/package/web… ):
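The original configuration was shown as a screenshot. As a rough substitute, here is a minimal sketch of a webpack-spritesmith setup; the folder paths and the retina suffix are illustrative assumptions, not the article's exact values:

const path = require('path')
const SpritesmithPlugin = require('webpack-spritesmith')

module.exports = {
  // ...the rest of the webpack configuration
  plugins: [
    new SpritesmithPlugin({
      src: {
        cwd: path.resolve(__dirname, 'src/assets/sprites/common'), // folder holding the small icons
        glob: '*.png'
      },
      target: {
        image: path.resolve(__dirname, 'src/assets/common.png'), // generated sprite sheet
        css: path.resolve(__dirname, 'src/assets/common.css')    // generated styles
      },
      apiOptions: {
        cssImageRef: './common.png' // how the generated CSS references the sprite image
      },
      retina: '@2x' // also pick up xxx@2x.png files for retina 2x screens
    })
  ]
}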
After running webpack, the sprite image and stylesheet shown above will be generated in the development directory; let's take a look at common.css:
As you can see, styles are generated automatically for every image resource we put in the common folder. No manual work is needed; the webpack-spritesmith plugin has already done it for us!
1.3.3. Using icon fonts (iconfont)
Whether compressed or combined into a sprite, an image is still an image, and it still consumes a lot of network transfer. With the arrival of icon fonts, front-end developers saw another wonderful world.
My favorite is Alibaba's vector icon library iconfont (www.iconfont.cn/), which has a huge number of vector icons…
Vector icons can do most of what images do, yet they are just characters plus CSS styles in the HTML, not even in the same order of magnitude of resource usage as image requests. If your project uses small icons, use vector icons.
But what if we are working on a company or team project that uses many custom icons, and the designers simply hand you a pile of .svg files?
It is actually very simple. Alibaba's vector icon library supports uploading local SVG resources, and here is another recommended site: IcoMoon. IcoMoon can also convert SVG images into CSS styles automatically. (Image from the IcoMoon homepage)
Click the "Import Icons" button to import local SVG resources, select them, and leave the rest to IcoMoon to generate the CSS; the process is much the same as with Alibaba's icon library.
1.3.4. Using WebP
WebP is an image format developed by Google to speed up image loading. Its compressed size is only about two thirds of JPEG, which saves a lot of server bandwidth and storage. Well-known sites such as Facebook and eBay are already testing and using the WebP format.
We can use the Linux command-line tool provided by Google to encode the project's images as WebP, or use an online service; here I recommend Upyun (www.upyun.com/webp). But in practice…
1.4. Network transmission performance detection tool — Page Speed
In addition to the Network panel, Chrome has a plugin for auditing network performance called Page Speed, which is featured in this article's cover image (because I think it is excellent). Install it via: Chrome menu → More tools → Extensions → Chrome Web Store → search for Page Speed and install.
(PS: accessing the Chrome Web Store requires a proxy; I won't go into how to set one up.)
This is how Page Speed works:
We just open the page to be tested, click the Start Analyzing button in Page Speed, and it automatically tests the network transmission performance for us. Here is the result for my site:
The best thing about Page Speed is that it gives complete, targeted advice about your site's performance bottlenecks, so you can optimize item by item. The Page Speed Score is your performance score, and 100/100 means there is nothing left to improve.
After optimization, measure the white screen time and first screen time of the page again in the Network panel of Chrome DevTools. Quite an improvement, isn't it?
1.5. Use the CDN
Last but not least,
No matter how good your performance optimization practice is, it can only reach its limit with the support of a CDN.
If we run $ traceroute targetIp on Linux or > tracert targetIp on Windows, we can see every router a packet passes through between the user and the target machine. It goes without saying that the farther the user is from the server and the more routers in between, the higher the latency. One purpose of a CDN is to solve this problem, and it also takes pressure off the origin IDC.
Of course, with an individual's finances (unless you are Wang Sicong) you are certainly not going to build your own CDN, but we can use the services offered by major vendors, such as Tencent Cloud. Configuration is very simple, so I will leave that for you to explore.
2. Page rendering performance optimization
2.1. Browser Rendering Process (Webkit)
You should already be roughly familiar with the browser's HTML rendering mechanism; the basic flow is shown in the diagram above. When you started out, a mentor or senior may have told you to reduce reflows and repaints because they hurt browser performance, but do you know why? Today we will introduce some of the deeper concepts from WebKit Technology Insider (a book I highly recommend buying; as a front-end engineer you should know what the browser kernel you face every day is doing).
PS: since the kernel is mentioned, let me also briefly explain the relationship between the browser's rendering engine, its interpreters and the other components, because juniors and front-end fans often ask me about this and can't tell them apart. I drew a diagram to illustrate (skip it if you are not interested):
The browser's interpreters live inside the rendering engine: Chrome uses the WebKit engine (now Blink), Safari uses WebKit, and Firefox uses Gecko. Inside the rendering engine there is the HTML interpreter (which builds the DOM tree during rendering), the CSS interpreter (which computes CSS rules), and originally the JS interpreter. But as JS grew ever more important and its work more complex, the JS interpreter gradually became an independent JS engine, like the well-known V8, which is also what Node.js uses.
2.2.DOM rendering layer and GPU hardware acceleration
If I told you that a page is made up of many, many layers, like a lasagna, can you imagine what the page would actually look like? For your imagination, I have attached a layer diagram of the previous Firefox 3D View plugin:
A page is composed of multiple DOM elements and render layers. After the render tree is built, a page goes through the following steps before it finally appears in front of us:
① The browser takes the DOM tree and splits it into independent render layers based on styles
② The CPU draws each layer into a bitmap
③ The bitmaps are uploaded to the GPU (graphics card) as textures
④ The GPU caches the render layers (if a layer uploaded next time has not changed, the GPU does not need to redraw it) and composites multiple layers into the final image
As we can see from the steps above, the layout is handled by the CPU and the drawing is done by the GPU.
Chrome provides tools to inspect the layout of render layers and GPU usage (so do try out Chrome's lesser-known panels; many of them are magical):
Chrome Developer Tools menu → More Tools →Layers
Chrome Developer Tools menu → More Tools → Rendering
After doing this, you should see something like this in your browser:
There is a lot here, so let's go through it module by module:
(I) First, the small black window at the top right of the page: as its caption says, it shows the GPU usage, so we can clearly see whether a lot of repainting is happening on the page.
(II) The Layers panel is the tool for viewing the DOM render layers just mentioned: the list on the left shows which layers exist on the page together with their details.
(III) The Rendering panel sits in the same drawer as the console, so don't lose track of it. The first three checkboxes are the ones we use most; let me explain what they do (acting as a free translator):
① Paint flashing: when checked, elements being repainted on the page are highlighted
② Layer Borders: similar to the Layers panel, it outlines the page's render layers with colored borders
③ FPS meter: opens the small black window mentioned in (I) to observe the frame rate and GPU usage
You might ask what the point of introducing DOM render layers is if they have nothing to do with performance optimization. Remember that the GPU caches our render layers: imagine if we could take an element that keeps reflowing and repainting and promote it into its own render layer; its redraws would then no longer drag all the other elements along with it.
Which raises the question: under what circumstances is a new render layer created? Just remember:
Video elements, WebGL, Canvas, CSS3 3D transforms, CSS filters, and elements whose z-index is greater than a sibling's will all trigger a new layer. The most common trick, though, is to add the following styles to an element:
transform: translateZ(0);
backface-visibility: hidden;
This will trigger the render layer (^__^).
We call this hardware acceleration, or GPU acceleration: separating the elements that reflow and repaint frequently from the "static" elements and letting the GPU share more of the rendering work. You have surely heard the term before; now you know exactly how it works.
2.3. Rearrangement and redrawing
Now it’s time for our main act, rearranging and redrawing. First throw out the concept:
(1) Reflow: changes to the layout of elements inside a render layer cause the page to be rearranged, for example resizing the window, adding or removing DOM elements, or modifying CSS properties that affect the element box's size (such as width, height, padding).
(2) Repaint: any change to an element's visual appearance triggers a repaint.
We use the Performance section of Chrome DevTools to measure the amount of time rearranged and redrawn pages take:
① Blue: time taken by HTML parsing and network communication
② Yellow: time taken by JavaScript execution
③ Purple: time taken by reflow (layout)
④ Green: time taken by repaint (painting)
Both reflow and repaint block the browser, so to improve page performance we need to reduce their frequency and cost and trigger re-rendering less often. As mentioned in 2.2, layout is handled by the CPU while painting is handled by the GPU; the CPU is far less efficient at this than the GPU, and a reflow always causes a repaint, whereas a repaint does not necessarily cause a reflow. So in performance optimization the focus should be on reducing reflows.
Here’s a site that lists in detail which CSS properties trigger rearrangements or redraws in different rendering engines:
csstriggers.com/
2.4. Optimization strategy
So much for theory; what everyone really wants is the solutions, so brace for a wave of practical advice:
(1) Separate CSS reads from writes: when we read an element's style right after writing to it, the browser is forced to re-render (reflow + repaint) to return up-to-date values. So when reading and writing element styles from JS, separate the two: do all the reads first, then all the writes, and avoid interleaving them. The most radical solution, which I recommend, is simply not to manipulate element styles from JS. (See the sketch after this list.)
(2) Batch style changes by toggling a class or setting the style.cssText property, instead of changing properties one by one.
(3) Update DOM elements offline: when doing heavy DOM work such as innerHTML or appendChild, use a DocumentFragment to assemble the elements offline and insert them into the page in one go, or hide the element with display: none first and operate on it while it is out of the document flow. (Also shown in the sketch after this list.)
(4) Hide unused elements with visibility: hidden, which reduces repaint pressure, and show them again only when needed.
(5) Compress DOM depth: avoid excessively deep child trees inside a render layer, use fewer DOM nodes to achieve the page style, and prefer pseudo-elements or box-shadow where possible.
(6) Specify image dimensions before rendering: since img is an inline element, its width and height change after the image loads, which in bad cases reflows the whole page. So specify the size in advance, or take the image out of the document flow.
(7) Promote elements that reflow and repaint heavily onto their own render layer, letting the GPU share the CPU's load. (Use this strategy with care: weigh whether the extra GPU usage actually buys a predictable performance gain, because too many render layers put unnecessary strain on the GPU; usually we only hardware-accelerate animated elements.)
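To make (1) and (3) concrete, here is a small sketch; the element selectors and item count are made up for illustration:

// (1) separate reads from writes: batch the reads, then batch the writes
const boxes = document.querySelectorAll('.box')
const widths = Array.from(boxes, box => box.offsetWidth) // reads first
boxes.forEach((box, i) => {
  box.style.width = widths[i] / 2 + 'px'                 // then writes
})

// (3) offline DOM update: assemble nodes in a DocumentFragment, insert once
const list = document.getElementById('list')             // hypothetical container
const fragment = document.createDocumentFragment()
for (let i = 0; i < 100; i++) {
  const li = document.createElement('li')
  li.textContent = 'item ' + i
  fragment.appendChild(li)
}
list.appendChild(fragment)                                // one reflow instead of one hundred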
3.JS blocking performance
JavaScript is so dominant in web development that even a simple static page contains some JS, and without JS there would be almost no user interaction. The problem with scripts is that they block parallel downloading of page resources and raise the process's CPU usage. What's more, now that Node.js is everywhere in front-end work, a memory leak or an accidentally written infinite loop does not just hurt the user experience; it can bring down our servers. In an era where JS spans both the front end and the back end, its performance bottlenecks cause more than user-experience problems, so JS performance work should not be underestimated.
During programming, if we use closures and then fail to release the resources they hold, or fail to clear references after the reference chain is broken (for example binding an event callback to a DOM element and later removing the element without unbinding), memory leaks occur, the CPU load rises, and the page stutters or crashes. We can investigate with the JavaScript Profiler panel provided by Chrome, opened the same way as the Layers panel, so I won't repeat that here.
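A minimal sketch of the DOM-related leak described above, with made-up element names; the bound callback keeps the removed node (and whatever it closes over) alive until both are released:

// Leaky pattern: the listener and the stored reference keep the detached node reachable
let bigBuffer = new Array(1e6).fill('*')                 // some large closed-over data
let button = document.getElementById('heavy-button')     // hypothetical element id
const onClick = () => console.log(bigBuffer.length)
button.addEventListener('click', onClick)

button.remove()                                          // removed from the DOM...
// ...but still retained through `button`, `onClick` and the listener

// Cleanup that lets the garbage collector reclaim everything:
button.removeEventListener('click', onClick)
button = null
bigBuffer = null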
If I add a while(true){} line to the code, CPU usage immediately climbs to an abnormal level (93.26%).
The browser's powerful memory recovery mechanism avoids the worst of this most of the time; even if a page hangs, the user only needs to end the relevant process (or close the browser) to solve it. But remember that the same thing can happen on our servers, that is, in our Node processes, and in serious cases it takes the server down and the site crashes. Most of the time we use the JavaScript Profiler to stress-test our Node services; with the node-inspector plugin we can see more precisely how much CPU each function consumes while the JS executes, and optimize accordingly.
(PS: until you are quite experienced, avoid using closures on the server side. On the one hand they are rarely necessary, as there are better solutions; on the other hand it is very easy to leak memory and cause unexpected consequences.)
4. [extension] Load balancing
Load balancing is listed as an extension because if you are building a personal or small-to-medium site you rarely need to worry about concurrency, but for a large site load balancing is an indispensable part of the development process.
4.1.Node.js handles IO intensive requests
Today's development process emphasizes front-end / back-end separation, what software engineering calls "high cohesion, low coupling". You can also think of it in terms of modularity: decoupling front end and back end splits a project into two large modules connected through an interface and developed independently. What is the benefit? I'll give the most practical one: "asynchronous programming". That is my own name for it, because decoupling the two ends feels a lot like an asynchronous queue in JS. The traditional model is "synchronous": the front end has to wait for the back end to finish the interface and know what data is available before it can start. After decoupling, we only need to agree on the interface in advance and both ends can develop in parallel, which is efficient and time-saving.
As we all know, the core of Node is event-driven: it processes user requests asynchronously through the event loop. Compared with a traditional back end, each user request is assigned to an asynchronous queue for processing, which makes the event-driven mechanism quite easy to picture. What is the biggest advantage of being event-driven? It does not block under highly concurrent I/O. For live-streaming sites this is crucial, and we have a successful precedent in Kuaishou: its powerful high-concurrency I/O essentially traces back to Node.
In fact, enterprise websites today commonly add a Node layer as a middle tier. The outline of such a site is shown below:
4.2. Achieving Node.js "multi-threading" with pm2
We all know the pros and cons of Node; here is a link to a discussion I found quite thorough: www.zhihu.com/question/19… . It's the same old story: those who say Node doesn't work point to the fact that Node is single-threaded, and we have a solution for that: pm2. Its website is pm2.keymetrics.io/. pm2 is a Node.js process manager that can start a Node.js service on every core of your machine. In other words, if your computer or server has a multi-core processor, it can run multiple Node.js processes, and it automatically handles load balancing between them, dispatching user requests to the less-stressed processes. It sounds like a real artifact! And its features go far beyond that; I won't introduce them all here, just know it when you need it. Installation is simple: install it globally with npm, $ npm i pm2 -g; for usage and features refer to the official site. pm2 is driven by a pm2.json file, its startup configuration file, which we write ourselves (a sketch follows below; details are in the GitHub docs). To run, we just enter the command $ pm2 start pm2.json.
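A minimal sketch of what such a pm2.json could contain; the app name and entry script are placeholders, not the article's actual project:

{
  "apps": [
    {
      "name": "my-site",
      "script": "./app.js",
      "instances": "max",
      "exec_mode": "cluster",
      "env": {
        "NODE_ENV": "production"
      }
    }
  ]
}

Here "instances": "max" starts one process per CPU core, and "exec_mode": "cluster" lets pm2 share the listening port and balance requests across those processes.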
Here is pm2 after startup:
4.3. Nginx sets up reverse proxy
Before you start setting up, you need to know what a reverse proxy is. If you are unfamiliar with this term, let’s start with a picture:
A proxy is what we usually call a middleman. A website's reverse proxy is a server that sits between the user and our real servers; its job is to distribute user requests to the less-stressed real servers, for example by polling. That should sound familiar: yes, I said something similar when introducing pm2. The reverse proxy performs the same load-balancing role, and now you can see the difference between the two: the reverse proxy balances load across servers, while pm2 balances load across processes. If you want a thorough understanding of reverse proxies, I recommend this Zhihu thread: www.zhihu.com/question/24… . You might think this is an ops concern and ask what it has to do with the front end. Indeed, for this part our only job is to hand ops a configuration file.
http {
    upstream video {
        ip_hash;
        server localhost:3000;
    }
    server {
        listen 8080;
        location / {
            proxy_pass http://video;
        }
    }
}
In other words, when handing off to ops we just adapt these few lines into our own configuration and send the file over; the ops folks will understand the rest without us saying a word.
But what do these lines mean? First, remember that nginx modules fall into three main categories: handler, filter and upstream. The upstream module receives, processes and forwards network data, and it is the one we need for a reverse proxy. Next let's look at what the configuration means.
4.3.1. Upstream configuration information
The identifier immediately following the upstream keyword is our custom project name, to which we add our configuration information with a pair of curly braces.
ip_hash keyword: routes requests from the same client IP to the same back-end server, so a returning user connects to the server they used before
server keyword: the address of our real server. This part we must fill in ourselves; otherwise how would ops know which server you deployed the project to, or that you wrapped it in a Node layer listening on port 3000?
4.3.2. Server Configuration Information
server is the basic configuration block of nginx; we apply our defined upstream to it through the server block.
listen keyword: the port the proxy server listens on
location keyword: works much like the routes we mentioned in the Node layer, mapping the user's request path to the corresponding upstream
5. Read more
Website performance optimization and monitoring is complex work with far more follow-up than can fit here; what I have covered is only the tip of the iceberg. Beyond knowing the development guidelines, it takes accumulated hands-on experience.
After reading quite a few books on website performance, the one I still prefer is Large-Scale Website Performance Monitoring, Analysis and Optimization by Tang Wen; its content is relatively up to date and practical, and I found a single read-through very rewarding and sobering. I hope readers interested in performance will pick it up after reading this article.
I also suggest going through Yahoo's performance rules (the well-known "Yahoo military rules") from time to time when you have a spare moment. A platitude, perhaps, but every item is a pearl; it would be great if you could remember them. Portal:
www.cnblogs.com/xianyulaodi…
A shameless plug
Our team is hiring!! Welcome to join ByteDance's commercialization front-end team. The technical work we are doing includes: upgrading the front-end engineering system, building the team's Node infrastructure, one-click CI release tooling for the front end, service-oriented component support, front-end internationalization, a general micro-frontend solution, refactoring of heavily-dependent business systems, a visual page-building system, BI (business intelligence), front-end test automation, and more. With a big front-end team of nearly a hundred people in our Beijing R&D center, there is bound to be an area that interests you. If you want to join us, please use my referral channel:
✨ ✨ ✨ ✨ ✨
Referral portal (it is prime recruiting season; click here to get a ByteDance referral!)
Exclusive entrance for ByteDance campus recruitment (referral code: HTZYCHN, post link: Join Bytedance – Recruitment)
✨ ✨ ✨ ✨ ✨
If you want to know more about our team's daily life and working environment, you can also click here