A 4,000-word, screenshot-heavy article, so mind your data plan and read carefully!
Performance optimization – front-end performance optimization on a shoestring, going online from start to finish
Hello everyone, I'm back. This chapter covers going online and optimizing performance once you are online.
Project addresses
- Live preview address
- Project repository address
- Code for this chapter
What you will learn in this chapter
- The Docker basics a front-end developer should know
- Deploying a front-end project to a local/public server
- Gzip optimization for front-end projects
- Understanding why CDNs matter
- Webpack on-demand loading
- Image-related optimizations
- How to analyze project dependencies so you can optimize the right things
- How to reduce webpack bundle size and build time
Going online
We usually develop locally, and the local environment is never exactly the same as the online one. Many projects hit problems on their first release that cannot be reproduced locally: font and style issues, webpack build issues, even quirks of the local machine. Running perfectly locally ≠ running perfectly online. So we need to build the project, test it in a simulated production environment to see whether it really runs well, and fix problems as soon as they show up.
Preparation
To avoid polluting your local environment while following this tutorial, I recommend installing Docker; the later operations-and-maintenance content will also build on Docker.
When you see the words "prepare Docker", don't be scared off just because you're a front-end developer. Install it and take the first step; refusing to learn is just waiting to fall behind. "Click here to learn about Docker"
Although Tomcat, Nginx, Apache, JBoss, Jetty and so on can all serve HTTP, this chapter starts with the most common one: Nginx.
Let me introduce Docker in plain English
Docker does what a virtual machine does, but in a more elegant way and with better, programmable cluster management. By default containers are isolated from each other; of course you can connect them with links/networks, or orchestrate them directly with docker-compose. Starting a container requires an image, which is much like a virtual machine image: you must pull the image down first.
Using Docker
Many of you may not have it installed yet. I won't cover installation here: go to the official site (there is a Chinese official site), download the Community Edition, and configure a Chinese registry mirror for faster pulls. On Windows you may need to enable virtualization; on Linux I recommend Ubuntu, and there is even an article advising against running Docker on CentOS for performance reasons.
Let's look at the local images first:
docker images
You can see I already have some images (I had deleted nginx beforehand).
Pull the Nginx image from Docker Hub
docker pull registry.docker-cn.com/library/nginx:latest
A plain docker pull nginx works too; the middle part of the address above is the Chinese mirror source.
OK, we've successfully pulled the Nginx image. By default it is stored under the name registry.docker-cn.com/library/nginx.
Packaging
Go to the source directory of our previous chapter and build it. The source code for the previous chapter is here
npm run build
Start the Docker container
docker run --name nginx -d -p 8888:80 -v /new-bee/dist:/usr/share/nginx/html registry.docker-cn.com/library/nginx
- A few notes on the options above ("I won't say too much, to avoid indigestion; explore the rest yourself")
Option | Explanation
---|---
-d | Run the container as a daemon
-p | Port mapping: 8888:80 maps the container's port 80 to port 8888 on the host machine
-v | Mount a directory: host path : container path
--name | Name the container
Test it
http://localhost:8888/#/
Of course, the first time you try Docker you may have more questions:
- How do you know you need to mount your directory to /usr/share/nginx/html?
- How can I view the Nginx logs?
- How do I configure Nginx inside the container?
- …
These beginner questions will be touched on briefly in this chapter and covered separately in a future piece on automated operations and maintenance. You can follow my blog.
Gzip
We can have webpack gzip the script files and upload them to the HTTP server; when browsing, the browser decompresses the compressed HTTP response body. Compared with compressing, decompressing is very fast (as long as the data is intact), so don't worry that the time the browser spends decompressing will hurt the user experience. In practice the browser's decompression time is far smaller, and far more predictable, than the extra transfer time an uncompressed payload costs on a congested network.
In the HTTP request the browser sends to the server, the Accept-Encoding header declares which compression formats the browser supports, i.e. which kinds of compressed bodies it can decompress (gzip and deflate, both provided by the zlib library). In its reply, the server uses the Content-Encoding header to declare which compression was applied to the HTTP response body.
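To make the negotiation concrete, here is a tiny Node sketch of my own (purely illustrative; later we'll let Nginx do this for us, and the vendor.js path is a made-up example):

```js
// Minimal illustration of the Accept-Encoding / Content-Encoding handshake
const http = require('http')
const zlib = require('zlib')
const fs = require('fs')

http.createServer((req, res) => {
  const raw = fs.createReadStream('./dist/vendor.js') // hypothetical asset path
  res.setHeader('Content-Type', 'application/javascript')
  if ((req.headers['accept-encoding'] || '').includes('gzip')) {
    res.setHeader('Content-Encoding', 'gzip') // tell the browser how the body is compressed
    raw.pipe(zlib.createGzip()).pipe(res)     // compress on the fly
  } else {
    raw.pipe(res)                             // client can't handle gzip: send it plain
  }
}).listen(3000)
```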
Hands-on gzip with webpack and Nginx
A budget server like mine usually comes with only 1 Mbps of bandwidth, which can't cope with large resource files. A vendor file sitting at 400 KB is painful; without gzip it really hurts.
Open the Network panel and observe:
It's 144 KB in size.
Take the core vendor bundle built by webpack as an example. We can see the client asking the server for gzip with Accept-Encoding: gzip, deflate, but unfortunately the server does not answer with the hoped-for Content-Encoding: gzip response header, so we need to find out why.
- First, let's check whether webpack produced the gzip files at all: look in the output directory for .js.gz files.
Unfortunately there are none, just a few .js files and the .map files used for debugging. It looks like the problem starts at the packaging step.
Remember this picture I sent out when we were building the project?
- package.json, the project descriptor file
Open it and see which script the build command executes.
Open build.js to see what it does. Has vue-cli really not configured webpack gzip for us?
const webpackConfig = require('./webpack.prod.conf');
Oh, we can see that gzip is indeed configured for webpack.
But it turns out the configuration is wrapped in this condition:
if (config.build.productionGzip) {
}
Tracking it down:
// Gzip off by default as many popular static hosts such as
// Surge or Netlify already gzip all static assets for you.
// Before setting to `true`, make sure to:
// npm install --save-dev compression-webpack-plugin
productionGzip: false,
All our doubts are resolved by the developer's comment explaining the reasoning: gzip is off by default because many popular static hosts such as Surge or Netlify already gzip all static assets for you, and before turning it on you must install compression-webpack-plugin.
First, install the dependency:
vim package.json
"devDependencies": {
  "compression-webpack-plugin": "^1.1.12"
}
Then change productionGzip to true.
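For reference, once the flag is on, the guarded block in webpack.prod.conf adds the compression plugin, roughly like the sketch below, which follows the vue-cli webpack template (the `asset` option belongs to compression-webpack-plugin 1.x; newer versions call it `filename`):

```js
// Sketch of the gzip block in build/webpack.prod.conf.js (vue-cli template style)
const CompressionWebpackPlugin = require('compression-webpack-plugin')

if (config.build.productionGzip) {
  webpackConfig.plugins.push(
    new CompressionWebpackPlugin({
      asset: '[path].gz[query]',   // emit vendor.js.gz alongside vendor.js
      algorithm: 'gzip',
      test: new RegExp('\\.(' + config.build.productionGzipExtensions.join('|') + ')$'),
      threshold: 10240,            // only compress assets larger than ~10 KB
      minRatio: 0.8                // skip files that barely shrink
    })
  )
}
```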
- Without further ado, try building:
npm run build
The build now produces .gz files, but gzip also needs server support: the server uses the client's Accept-Encoding request header to decide which version of the script to return, and the browser decompresses it. We pulled the Nginx image earlier, and Nginx does not enable gzip compression for us by default, so let's go and check.
Enter the running Nginx container
docker exec -it nginx /bin/bash
or
docker exec -it nginx "bash"
Option | Explanation
---|---
exec | Run a command inside the Docker container
-i | Interactive mode; usually used together with -t
-t | Allocate a pseudo-TTY for the container; usually used together with -i
-it | -it = -i -t
"bash" or /bin/bash | The command to run; the container must keep a foreground process running or it will exit
Now that we're inside the Nginx container, the first question is:
Where is Nginx??
The Linux whereis command is used to locate files.
It searches specific directories for files matching the given name; those files should be source code, binaries, or help files.
The command can only locate binaries, source files, and man pages; to locate ordinary files, use the locate command.
Syntax
whereis [-bfmsu] [-B <directory>...] [-M <directory>...] [-S <directory>...] [file]...
- View the nginx location
root@e0017cab245f:/# whereis nginx
nginx: /usr/sbin/nginx /usr/lib/nginx /etc/nginx /usr/share/nginx
- We found the web root html/ in /usr/share/nginx
- We found the nginx configuration file in /etc/nginx/
PS: there is so much middleware out there, who can remember it all? If you can't remember, just look it up when you need it.
- Take a look at the nginx configuration
Sure enough, gzip is not enabled; it's commented out.
We uncomment the gzip lines and, to make sure the server doesn't skip compressing CSS and friends, add a few lines of classic configuration in one go.
gzip on;
gzip_min_length 1k;
gzip_buffers 4 16k;
gzip_comp_level 6;
gzip_types application/javascript text/plain application/x-javascript text/css application/xml text/javascript application/json;
gzip_vary on;
The nginx configuration code is here
How to modify configuration inside a Docker container
For now, you have two options:
- Modify it directly: exec into the container and add the gzip snippet above. Disadvantage: if you ever rebuild the container from the image, the configuration inside the container is lost (unless you commit the image).
- Keep the configuration file outside the container and mount it in, a once-and-for-all solution.
Going with the second option, we create an nginx/ directory at the same level as new-bee/ and put an nginx.conf file inside it.
The nginx configuration code is here
- Stop the nginx container first
docker stop nginx
- Delete the nginx container
docker rm nginx
- Rebuild the nginx container
docker run --name nginx -d -p 8888:80 -v /new-bee/dist:/usr/share/nginx/html -v /nginx/nginx.conf:/etc/nginx/nginx.conf:ro registry.docker-cn.com/library/nginx
- See the effect
http://localhost:8888/#/
To keep the browser from serving its cache (304), clear the browser cache or use incognito mode.
It has worked.
- Let’s see how much the size has shrunk
It's only about 50 KB now, roughly two thirds smaller. For large projects the savings are not a mere 100 KB but far more. Compression algorithms such as gzip encode heavily repeated fragments only once, so the more repetition, the better the compression. That matters a lot for cloud servers, where bandwidth is now more expensive than gold.
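If you want to convince yourself that repetition is what makes gzip shine, here's a tiny Node experiment of my own (not from the project):

```js
// Quick check: a highly repetitive payload shrinks dramatically under gzip
const zlib = require('zlib')

const payload = Buffer.from('function noop() { return 42 }\n'.repeat(5000))
const gzipped = zlib.gzipSync(payload)
console.log(`raw: ${payload.length} bytes, gzipped: ${gzipped.length} bytes`)
// The gzipped size is a tiny fraction of the raw size because the same line
// repeats over and over, just like boilerplate does inside real bundles.
```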
CDN
You'll notice that wherever a CDN can be used, I choose to use one. So how important is a CDN for an online service?
The principle
Speeding up requests
Without further ado, here is a comparison of speed tests run from different locations.
- This is my forum
You can see that only a handful of locations get good results; the rest are a mess.
- This is Taobao
Need I say more? Good everywhere. Losing this comparison on an underfunded project is perfectly normal, but it should make the point of a CDN clear: the main value is not saving the open-source project's server bandwidth, it's access speed from nodes all over the country. It also answers the classic question: "The project is fast where I deployed it, why is it so slow for you, is your network bad?" The CDN is the answer.
cookie
Let's take the real forum project as an example and look at the network panel:
Several advantages of using a CDN
- Faster access
- Less pressure on the server
- Smaller, faster webpack bundles (see below)
Cookies on the client side are bound to the server's domain name. As the figure above shows, we need XHR requests to carry the cookie to the server to obtain the corresponding permissions. But think about it: every JS, image, and even CSS request also drags those useless cookies along. With a large number of users, the server suffers pain that shouldn't be its own, and that kind of waste should be avoided. Browse any mature website and you'll find it serves static assets from its own CDN domain. That not only improves access speed across different regions, it also greatly reduces server overhead; a small configuration sketch follows.
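A minimal sketch of pointing webpack at a cookie-free static domain (the CDN URL below is a hypothetical placeholder, not the project's real address):

```js
// Sketch: serve built assets from a separate, cookie-free CDN domain
module.exports = {
  output: {
    filename: '[name].[chunkhash].js',
    // Hypothetical placeholder domain; requests to it carry no site cookies
    publicPath: 'https://static.example-cdn.com/new-bee/'
  }
}
```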
Reducing webpack bundle size and build time
A long time ago I suffered through very slow webpack builds at my company. In dev mode we could comment out routes outside the area we were working on, but that trick doesn't help a release build, and even multi-threaded packaging with HappyPack was not satisfying.
We can move third-party libraries out of the bundle with externals, or use webpack's DllPlugin to pre-build the dependency libraries (this is the important part) so that build time drops sharply. DllPlugin essentially does the same thing we would do by hand when separating third-party libraries, but for applications with a large number of packages, automating it is a significant productivity boost; see the sketch below.
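A minimal DllPlugin sketch, assuming vue, vue-router and axios are the heavy, rarely-changing libraries you want to pre-build (swap in your own list):

```js
// webpack.dll.conf.js: pre-build stable vendor libraries once
const path = require('path')
const webpack = require('webpack')

module.exports = {
  entry: {
    vendor: ['vue', 'vue-router', 'axios'] // assumption: your heavy third-party deps
  },
  output: {
    path: path.resolve(__dirname, '../dll'),
    filename: '[name].dll.js',
    library: '[name]_library'              // must match DllPlugin's `name`
  },
  plugins: [
    new webpack.DllPlugin({
      path: path.resolve(__dirname, '../dll/[name]-manifest.json'),
      name: '[name]_library'
    })
  ]
}

// Then the main config references the manifest so these libs are never rebuilt:
// new webpack.DllReferencePlugin({ manifest: require('../dll/vendor-manifest.json') })
```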
How do you analyze project dependencies
webpack-bundle-analyzer
Just as Maven has its own dependency-analysis tooling, npm has third-party analysis tools too. Here is a brief introduction to one of them: webpack-bundle-analyzer. It generates a report in the browser that visually shows what is big and what needs optimizing, and it even predicts gzipped sizes. Again, using the real project as an example:
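Wiring the plugin into a webpack config usually looks roughly like this (a sketch; the npm_config_report guard matches the npm script below):

```js
// Sketch: attach the analyzer only when the report flag is set
const { BundleAnalyzerPlugin } = require('webpack-bundle-analyzer')

if (process.env.npm_config_report) {
  webpackConfig.plugins.push(new BundleAnalyzerPlugin())
}
```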
Install the dependency, then add the build script to package.json as the official guide suggests:
"analyz": "NODE_ENV=production npm_config_report=true npm run buildProd".Copy the code
Run it:
npm run analyz
Effect:
Analysis:
Problems found: the files under static/ used for code highlighting are large; if you have the budget, consider putting them on a CDN. node_modules/element-ui (Ele.me's component library) is also big (I was lazy and imported it globally; importing only the components you actually use avoids bundling the whole library), so that can be optimized. Anyone with time to spare can keep digging from there.
Webpack on-demand loading: everything is a module
When we bundle an application, the JavaScript bundle can become very large and slow down page load. It would be far more efficient to split the components belonging to different routes into separate chunks and load each one only when its route is visited. Webpack's code-splitting feature lets us lazy-load route components.
Official documentation
The official docs cover this in detail, so I'll lazily borrow from them and show you the classic treatment. We don't bother splitting at the component level; we split directly by route. You can see the route splitting in the real project's router.
You'll find many lines like this:
const Blog = () => import(/* webpackChunkName: "blog" */ '@/container/blog/Blog')
So something like this:
/* webpackChunkName: "blog" */
It is not written for nothing: it works with webpack to split the project into chunks along each route. We can see how the real project loads:
blog.[hash].js
That file isn't something we wrote; webpack split it out. This way a single-page Vue app no longer loads every script regardless of which module you visit, which greatly improves loading speed.
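Put together in the router, a minimal sketch looks like this (the Blog path comes from the example above; the Home path is a hypothetical placeholder):

```js
// Router sketch: each route's component lives in its own lazily-loaded chunk
import Vue from 'vue'
import Router from 'vue-router'

Vue.use(Router)

// webpack turns each of these into a separate blog.[hash].js-style chunk
const Blog = () => import(/* webpackChunkName: "blog" */ '@/container/blog/Blog')
const Home = () => import(/* webpackChunkName: "home" */ '@/container/home/Home') // hypothetical path

export default new Router({
  routes: [
    { path: '/', component: Home },
    { path: '/blog', component: Blog }
  ]
})
```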
Image handling
I hadn't planned to go into this much, but the most common approaches are SVG, base64, or CDN-like services built on components such as FastDFS.
base64
Simply put, base64 reduces the number of HTTP requests. An XHR request is not free: it carries extra request- and response-processing overhead. Firing dozens of HTTP requests just to fetch emoji icons, for example, would feel sluggish, so base64 is usually used for them. It cuts the number of HTTP requests at the cost of a somewhat larger payload for the images themselves. If you use webpack and the emoji images live in your project, webpack can base64-encode them for you automatically.
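That automatic inlining usually comes from a url-loader rule along these lines (a sketch, assuming url-loader is installed; the 10 KB limit is just an example):

```js
// Sketch: inline small images as base64 data URIs, emit larger ones as normal files
module.exports = {
  module: {
    rules: [
      {
        test: /\.(png|jpe?g|gif|svg)(\?.*)?$/,
        loader: 'url-loader',
        options: {
          limit: 10240,                     // files under ~10 KB become base64 strings
          name: 'img/[name].[hash:7].[ext]' // bigger files keep a normal URL
        }
      }
    ]
  }
}
```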
Compressing images
Images uploaded by users can be compressed in size or quality to save bandwidth. GM (GraphicsMagick) is commonly used to compress the large originals users upload into several sizes, and the business loads whichever fits: for avatars, for example, the original is normally never requested, and apps like Toutiao load small inline thumbnails by default when you're on mobile data. None of this can be done on the client side alone; it needs server-side compression.
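A server-side sketch using the gm npm package (it needs GraphicsMagick installed on the machine; paths and sizes below are hypothetical):

```js
// Sketch: shrink an uploaded original into a small avatar on the server
const gm = require('gm')

gm('uploads/avatar-original.jpg')   // hypothetical uploaded file
  .resize(200, 200)                 // scale down to avatar size
  .quality(80)                      // trade a little quality for a much smaller file
  .write('uploads/avatar-200.jpg', (err) => {
    if (err) console.error('compress failed:', err)
  })
```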
Conclusion
Of course, this is only the first step of a long march; the road of optimization ahead is vast. Off the top of my head: lazy-load (optimizing the first-screen experience), PWA (turning it into a web app), server-side rendering (for SEO), skeleton screens (improving perceived performance); the back-end and server-side parts haven't been written up yet.
SO – Work hard!
- Project preview address
- Project repository address
- Blog address
- Code for this chapter
- Feel free to star the repo if you find it interesting
Series articles
Preface – The meaning of open source
Introduction – A look at the history of the Web
Explore – An in-depth look at front-end/back-end separation
Ready? – Is your front-end knowledge in place? Front-end infrastructure and tech stack intro
Hands-on – Build a standard front-end project skeleton in 5 minutes
Practice – Keep polishing the front-end architecture to stay ahead of the game
Perfect – A hands-on guide to quickly building a site layout
Final chapter – Front-end optimization and going online
PS: can anyone recommend a hassle-free image host for Mac? Qiniu has started charging, and MWeb can't publish straight to Qiniu, so I have to upload images one by one. Quite frustrating.