Performance optimization considerations
When it comes to performance tuning, we need to know why, where, and how. The answer to the first question, why optimize performance, is simple: a medium to large Web application handles heavy concurrent traffic, and if performance cannot keep up, users get a very poor experience. Performance optimization must therefore be considered in medium to large projects; even for small Web applications, an optimized approach still gives users a better experience. Beyond user experience, the reduced server load and fewer network requests that come with better performance are also valuable. The second question, where to optimize, or how to identify the optimization points and clarify the goals, is a critical step. To illustrate, consider the classic interview question: what happens after you type in a URL? Here are the steps:
- DNS resolution
- Establishing a TCP Connection
- Sending an HTTP request
- The server side processes the request and issues an HTTP response
- The browser receives the response, then parses and renders the page
Based on these five stages of a Web request, we can choose targeted optimization methods; these five aspects are exactly what we consider when doing performance optimization. Specifically: how can the time taken by DNS resolution be reduced? How can the TCP three-way handshake be made faster? How can HTTP requests and responses be made cheaper? How can performance be improved on the browser side? In a nutshell, optimization spans the network layer, the rendering layer, specific application techniques (lazy loading, throttling, debouncing), and monitoring tools.
Specific methods of performance optimization
Network performance optimization
As mentioned in the previous section, the first four stages are network concerns, so this section focuses on network-level performance. For DNS resolution and TCP connection establishment there is little we can do: DNS resolution can lean on the browser's DNS cache and DNS prefetching, and TCP connections can use persistent (keep-alive) connections, preconnection, and so on. These are largely handled by the underlying network environment, so we can set them aside for now. Where we can really make a difference is in HTTP requests and responses, guided by the following two points:
- Reduce the number of HTTP requests
- Reduce the time spent on a single request
If the request and response payloads are small, performance improves, so how do we shrink those resources? Incidentally, the widely used webpack is itself an optimization point, because builds can take too long and the output can still be large. How do we improve that?
Webpack optimization
- Don't make babel-loader do more than it must: use include or exclude to skip files that don't need transpiling, and turn on caching so transpilation results are written to the file system, which greatly improves loader efficiency (see the config sketch after this list).
- For third-party libraries, use DllPlugin: it packages them into a single dependency bundle that is rebuilt only when the dependencies themselves change.
- HappyPack turns webpack's single-threaded loader work into multiple processes, greatly improving build speed.
- Load code on demand (code splitting).
- Remove dead code; tree-shaking is the typical application.
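A minimal webpack.config.js sketch of the babel-loader, on-demand loading, and tree-shaking points above (paths and file names are illustrative assumptions; DllPlugin and HappyPack need their own setup and are omitted here):

```js
// webpack.config.js — illustrative sketch, not a complete production config
const path = require('path');

module.exports = {
  mode: 'production',                 // enables tree-shaking and minification
  entry: './src/index.js',
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: '[name].[contenthash].js',
  },
  module: {
    rules: [
      {
        test: /\.js$/,
        include: path.resolve(__dirname, 'src'),  // only transpile our own code
        exclude: /node_modules/,                   // never transpile dependencies
        use: {
          loader: 'babel-loader',
          options: { cacheDirectory: true },       // cache transpilation results on disk
        },
      },
    ],
  },
  optimization: {
    splitChunks: { chunks: 'all' },   // split shared/vendor chunks for on-demand loading
  },
};
```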
Gzip compression of HTTP
HTTP compression is negotiated through headers: the client advertises support with Accept-Encoding: gzip in the request, and the server compresses the response body and labels it with Content-Encoding: gzip. By definition, HTTP compression re-encodes the HTTP body in order to reduce its size. Large projects, especially those with lots of repetitive markup, see significant gains from gzip, while small projects see little. Gzip compression happens on the server side, so it essentially trades server CPU time and compression time for HTTP transfer time, and that trade-off has to be weighed.
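As a concrete illustration of the server side of that trade-off, here is a minimal sketch using Node with Express and the compression middleware (assumes the express and compression npm packages are installed; the threshold value is illustrative):

```js
const express = require('express');
const compression = require('compression');

const app = express();

// Gzip responses for clients that send `Accept-Encoding: gzip`;
// bodies smaller than 1 KB are not worth the CPU cost of compressing.
app.use(compression({ threshold: 1024 }));

app.use(express.static('dist'));   // serve the bundled front-end assets

app.listen(3000);
```

In production this is often handled by the reverse proxy instead (nginx and similar servers have built-in gzip support) rather than the application process.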
Image optimization
In today's Web, images, audio, and video are becoming the dominant forms of data. Images account for the largest share, and each image occupies a significant amount of storage and bandwidth, so image optimization is critical to the performance of the HTTP request/response stage. The simplest form of image optimization is knowing which image format suits which scenario.
- JPG/JPEG: lossy compression, lightweight files with rich color, no transparency support; well suited to large hero images such as a home-page banner.
- PNG: lossless compression and high quality, with transparency support; the drawback is larger file size. PNG-8 and PNG-24 differ in bit depth: PNG-8 supports at most 256 colors, while PNG-24 supports roughly 16.7 million (2^24). Use PNG-8 whenever it is sufficient. Small logos are best served as PNG for cleaner color reproduction.
- WebP: Google's newer image format, which aims to combine the strengths of the others and performs best overall; its weakness is compatibility, as for a long time only Chrome supported it well.
- SVG: scalable vector graphics, an XML-based text format that is small and widely compatible; because it describes vectors rather than pixels, it never distorts no matter how much it is scaled.
- CSS Sprites: combine many small icons and background images into one image, so a single image file replaces many small icon requests.
- Base64: a complement to sprites. Small images are encoded as text, so the encoded result can be written directly into HTML or CSS, removing those HTTP requests and their latency entirely (see the sketch after this list).
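A minimal sketch of producing a Base64 data URI for inlining (the file name icon.png is an illustrative assumption):

```js
const fs = require('fs');

// Read a small icon and encode it as Base64 text.
const base64 = fs.readFileSync('icon.png').toString('base64');
const dataUri = `data:image/png;base64,${base64}`;

// The data URI can be dropped straight into CSS or HTML, e.g.
//   background-image: url("data:image/png;base64,...");
console.log(dataUri.slice(0, 80) + '...');
```

In a webpack build, url-loader (or asset modules in newer versions) can perform this inlining automatically for images below a configured size threshold.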
Browser caching mechanism
The purpose of caching is to reduce the number of HTTP requests and thus improve performance. The browser cache involves four mechanisms, which differ in priority:
- Memory Cache
- Service Worker Cache
- HTTP Cache (most critical, most commonly used)
- Push Cache (introduced with HTTP/2)
HTTP Cache
HTTP caching is configured on the server side. When the client sends a request for the first time, the server returns the relevant directives in the response headers, and the client caches them along with the resource. HTTP caching is divided into the strong cache and the negotiated cache, and the strong cache has higher priority. The strong cache is controlled with the Expires and Cache-Control headers: if it is hit, status code 200 is returned and the resource is read straight from the cache without contacting the server. Expires, a product of HTTP/1.0, uses a timestamp: the server writes an expiration time into the response header of the first request, and if the local time is earlier than that expiration time the resource is fetched from the cache, otherwise the server must be visited again. The drawback of this approach is the unreliability of local time: the local clock may differ from the server's, or may have been changed, which breaks the strong cache. In HTTP/1.1, Cache-Control solves this problem and fully replaces Expires: its max-age directive expresses the lifetime of a resource relative to the request, so it should be preferred.
A few other Cache-Control directives matter. public and private control whether a resource may be stored by intermediaries such as proxy/CDN servers (public) or only by the end user's browser (private). no-cache allows the response to be cached but requires the browser to check with the server, before each use, whether the resource has expired. no-store forbids caching entirely: the browser only sends requests to the server and consumes the responses.
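A minimal sketch of a strong-cache policy in a plain Node server (file paths and max-age values are illustrative assumptions):

```js
const http = require('http');
const fs = require('fs');

http.createServer((req, res) => {
  if (req.url === '/static/app.js') {
    // Strong cache: the browser may reuse this file for up to a year
    // without contacting the server again.
    res.setHeader('Cache-Control', 'public, max-age=31536000');
    res.end(fs.readFileSync('./dist/app.js'));
  } else {
    // HTML entry point: cached, but revalidated with the server on every use.
    res.setHeader('Cache-Control', 'no-cache');
    res.end(fs.readFileSync('./dist/index.html'));
  }
}).listen(3000);
```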
Negotiated caching, as its name implies, requires the client and server to "negotiate", that is, to communicate. The browser asks the server whether the resource has changed: if not, the server returns status code 304 and the browser falls back to its local cache; otherwise the server responds with the latest resource. The negotiated cache relies on Last-Modified/If-Modified-Since and ETag/If-None-Match. Last-Modified is a timestamp returned to the client in the response header of the first request; each subsequent request carries an If-Modified-Since header whose value is that Last-Modified timestamp. On every request, the server compares this timestamp with the resource's last modification time: if nothing changed, it returns 304 and the browser reuses its cache; otherwise it sends the latest resource in the response. Timestamps have problems: for example, saving a file without actually changing its content updates the modification time, so the resource is mistaken for changed and the cache goes unused. In other words, the server cannot always perceive file changes correctly. ETag solves this: it is a unique identifier that the server generates from the resource's content, so it changes exactly when the content changes. It too is returned in the header of the first response, and subsequent requests attach it in an If-None-Match header. If the values still match, the server returns 304; otherwise it sends the updated resource. ETag has higher priority and is more accurate, but computing it costs server CPU and can hurt server performance.
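A minimal sketch of ETag-based negotiated caching in plain Node (the file path and hashing scheme are illustrative assumptions):

```js
const http = require('http');
const fs = require('fs');
const crypto = require('crypto');

http.createServer((req, res) => {
  const body = fs.readFileSync('./dist/index.html');
  // Derive the ETag from the content: it changes only when the content does.
  const etag = crypto.createHash('md5').update(body).digest('hex');

  if (req.headers['if-none-match'] === etag) {
    res.statusCode = 304;          // unchanged: browser reuses its cached copy
    res.end();
  } else {
    res.setHeader('ETag', etag);   // first visit or changed: send the resource
    res.statusCode = 200;
    res.end(body);
  }
}).listen(3000);
```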
Memory Cache
The in-memory cache, as its name implies, has the highest priority and the fastest access, but it is also the shortest-lived: it shares its lifetime with the renderer process and is wiped from memory when that process ends. Memory is limited and precious, so only small resources such as small scripts and small images tend to end up there.
Service Worker Cache
A Service Worker is a thread that runs independently of the main JS thread. Because of that independence it cannot manipulate the DOM, and its work does not affect page performance. Service-Worker-based offline caching is another form of browser caching. Note that Service Workers only work over HTTPS.
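A minimal sketch of offline caching with a Service Worker (the cache name and file list are illustrative assumptions; the page would register this file with navigator.serviceWorker.register('/sw.js')):

```js
// sw.js
const CACHE_NAME = 'app-cache-v1';

self.addEventListener('install', (event) => {
  // Pre-cache the core assets when the service worker is installed.
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) =>
      cache.addAll(['/index.html', '/app.js', '/app.css'])
    )
  );
});

self.addEventListener('fetch', (event) => {
  // Serve from the cache when possible, fall back to the network otherwise.
  event.respondWith(
    caches.match(event.request).then((hit) => hit || fetch(event.request))
  );
});
```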
Push Cache
Push Cache, introduced with HTTP/2, has the lowest priority among the browser caches. It exists only for the current session and is dropped when the session ends.
Local storage
Local storage, like the browser cache, can also reduce the number of HTTP requests and improve performance. It includes cookies, Web Storage, and IndexedDB; we will look at them one by one.
Cookie
A cookie is a small piece of state-keeping data stored in the browser: it is generated by the server on the client's first visit, returned in the response header, and attached to every subsequent request to that server. As we all know, HTTP is a stateless protocol, and the cookie mechanism exists to address exactly that pain point. Cookies are a client-side scheme for maintaining state, which means the user must have cookies enabled; if the user disables them, the mechanism breaks down. Strictly speaking, cookies only carry state information; the mechanism that actually holds the state on the server side is the session.
A cookie's content mainly includes a name, value, expiration time, path, and domain. The name identifies the cookie, and the path and domain together define its scope. If no expiration time is set, the cookie lives only for the session: it is kept in memory rather than on disk and disappears when the session closes. If an expiration time is set, the cookie is written to the browser's disk and can be shared across browser processes (see the sketch below).
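A minimal sketch of reading and writing cookies on the client (the names and values are illustrative assumptions):

```js
// Set a cookie scoped to the whole site, expiring in 7 days.
const expires = new Date(Date.now() + 7 * 24 * 60 * 60 * 1000).toUTCString();
document.cookie = `theme=dark; path=/; expires=${expires}`;

// document.cookie returns every cookie visible to the current page
// as a single "name=value; name=value" string.
console.log(document.cookie);

// On the server side, the same data arrives in the Cookie request header
// and is created with a Set-Cookie response header.
```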
The disadvantage of cookies is their small size: only about 4 KB each. In addition, too many cookies cause a real performance cost, because cookies are tied to the domain name: every request under that domain carries them, so a large number of HTTP requests haul along information they do not need and waste resources. Web Storage was born to solve this problem.
Web Storage
Web Storage is a browser storage mechanism introduced by HTML5. It has two types: Local Storage and Session Storage. Local Storage is persistent local storage that remains until the user deletes it, whereas Session Storage lasts only for the duration of a session. Web Storage offers a much larger capacity than cookies and is never sent to the server. It can be seen as a complement to cookies, but since it is also a simple key-value store, it only suits simple data structures.
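A minimal sketch of the Web Storage API (the keys and values are illustrative assumptions):

```js
// Local Storage: persists across sessions until explicitly removed.
localStorage.setItem('user', JSON.stringify({ name: 'alice', theme: 'dark' }));
const user = JSON.parse(localStorage.getItem('user'));

// Session Storage: identical API, but cleared when the session/tab ends.
sessionStorage.setItem('draft', 'unsaved form text');

// Both stores hold only strings, hence the JSON round trip above.
localStorage.removeItem('user');
```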
IndexedDB
Frankly, the author is still unfamiliar with this topic and has not used it in practice, so what follows is only a record of my current understanding. IndexedDB is a non-relational database that runs in the browser. A database lets us handle complex data structures and large volumes of data, so IndexedDB can be seen as a more powerful upgrade over Local Storage.
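A minimal sketch of the IndexedDB API (the database and store names are illustrative assumptions):

```js
const request = indexedDB.open('app-db', 1);

request.onupgradeneeded = (event) => {
  // Runs on first open (or version bump): define the object store here.
  const db = event.target.result;
  db.createObjectStore('users', { keyPath: 'id' });
};

request.onsuccess = (event) => {
  const db = event.target.result;
  const tx = db.transaction('users', 'readwrite');
  tx.objectStore('users').put({ id: 1, name: 'alice', visits: 3 });
  tx.oncomplete = () => db.close();
};
```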
CDN cache
A Content Delivery Network (CDN) is a group of servers distributed across different regions, each storing copies of the data, so requests can be routed to the server closest to the user, which improves response speed. The CDN is itself a classic front-end interview topic. At the heart of a CDN are two things: caching and back-to-origin. Caching means copying resources from the origin server onto the CDN servers so that later users can be served from there. Back-to-origin is equally easy to understand: if a CDN server does not have the resource, it requests it from the origin server.
The CDN is mainly used for static resources, while the origin server handles dynamically generated pages. Static resources are images, audio and video files, and JS/CSS files that do not change. The CDN should live on a domain different from the main domain, so that requests to it do not needlessly carry the site's cookies.
Render layer optimization (browser-side optimization)
Server side rendering
Server-side rendering (SSR) is the counterpart of client-side rendering. We are more familiar with client-side rendering, but server-side rendering has grown rapidly in popularity in recent years. With client-side rendering, the server sends the static files needed for rendering (HTML/CSS/JS and so on) to the client, and the client's JS builds and updates the DOM to produce the final result; its hallmark is that the rendered content cannot be found in the HTML source file. With server-side rendering, the server pre-renders the page into HTML and returns it to the client, which can display the result directly; its hallmark is the opposite: the content shown on the page is present in the HTML source. Server-side rendering mainly solves two problems: it makes the content easy for search engines to index, and it spares users from waiting for resource loading and JS execution, which greatly improves the user experience. It is not a panacea, though: it puts heavy pressure on the server, and real-world adoption is limited, since the number of users' browsers far exceeds the number of servers.
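A minimal sketch of the idea, using Express (the route, data, and markup are illustrative assumptions):

```js
const express = require('express');
const app = express();

app.get('/', (req, res) => {
  // The server assembles the finished HTML before it reaches the browser,
  // so the content is visible in the page source and to search engines.
  const items = ['home', 'docs', 'blog'];              // pretend this came from a database
  const list = items.map((i) => `<li>${i}</li>`).join('');
  res.send(`<!DOCTYPE html><html><body><ul>${list}</ul></body></html>`);
});

app.listen(3000);
```

In practice, frameworks ship dedicated renderers for this (for example vue-server-renderer or ReactDOMServer) rather than hand-built strings like the above.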
CSS optimization
CSS stylesheets must be parsed into the CSSOM tree before styles can be applied, and that parsing takes time, so saving time here also yields a performance gain. A point that is easy to get wrong: browsers match CSS selectors from right to left, so be careful with the universal selector. Select only the elements that actually need selecting, prefer class or ID selectors over bare element selectors, and use inheritance where possible instead of redefining properties. Finally, reduce nesting: deep nesting seriously slows down CSS parsing. All of the above speeds up CSS parsing.
Avoid blocking
As we all know, CSS and JS can block rendering. The DOM and CSSOM combine to form the render tree only after the CSS has been parsed into the CSSOM tree, so no matter how far DOM construction has progressed, rendering cannot start until the CSSOM is also ready; this is CSS blocking. CSS parsing begins when the parser reaches the link or style tags, so to finish it as early as possible we should place them near the top of the HTML, in the head. In short, CSS is a render-blocking resource: we should start loading it as early as possible and get it to the client as fast as possible. To start loading early, put CSS in the head; to deliver it fast, host CSS and other static resources on a CDN.
The main job of JS is to give static pages dynamic, interactive behavior. It is not a required resource for the first render, because a static page can already be shown to the user. Since interactive JS inevitably manipulates the DOM, JS execution blocks DOM construction: when the parser meets a script, the JS engine takes control from the rendering engine and rendering pauses until the script has run, mainly because the script may modify the DOM. If you don't want JS to block rendering, use async and defer. async: the script loads asynchronously and executes as soon as it finishes loading. defer: the script loads asynchronously and its execution is deferred until parsing is complete. async is recommended when a script has no strong dependency on other scripts or on the DOM; defer is recommended when it does. Choosing the right mode keeps JS loading from blocking rendering.
DOM optimization
DOM manipulation is expensive because it involves interaction between the JS engine and the rendering engine (the browser kernel). That interaction goes through an interface, and the interface's capacity is necessarily limited: lots of DOM operations mean heavy use of that interface, so we must reduce DOM manipulation.
Reduce DOM manipulation
Essentially, reducing DOM manipulation means letting JS do the heavy lifting for the DOM. JS itself runs very fast, so do the work in JS first and touch the DOM once at the end. The main techniques, with a sketch after the list:
- Cache DOM queries in variables
- Make targeted changes instead of broad ones
- Build changes in a DOM fragment and insert it once
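A minimal sketch of batching insertions with a DocumentFragment (the list id and item count are illustrative assumptions):

```js
const list = document.getElementById('list');   // cache the DOM query once

const fragment = document.createDocumentFragment();
for (let i = 0; i < 100; i++) {
  const li = document.createElement('li');
  li.textContent = `item ${i}`;
  fragment.appendChild(li);                     // work happens off-DOM, no reflow yet
}

list.appendChild(fragment);                     // a single DOM operation at the end
```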
Asynchronous update
When we change data through the Vue.js API, the DOM does not update immediately; the change is pushed into a queue and flushed in a batch at a later point. This is an asynchronous update. Asynchronous updates help us avoid over-rendering by focusing on the final result rather than every intermediate step (see the sketch below).
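A minimal sketch of Vue's batched updates and $nextTick (Vue 2-style API; the component shape is an illustrative assumption):

```js
new Vue({
  el: '#app',
  data: { count: 0 },
  methods: {
    increment() {
      this.count++;                        // queued, DOM not touched yet
      this.count++;                        // merged into the same flush
      console.log(this.$el.textContent);   // still shows the old value

      this.$nextTick(() => {
        // Runs after the queue is flushed and the DOM has been updated.
        console.log(this.$el.textContent); // now shows the new value
      });
    },
  },
});
```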
The event loop works like this: synchronous tasks run first, then asynchronous tasks from the queues. Asynchronous tasks are divided into macrotasks and microtasks. The script tag itself counts as a macrotask, so execution starts with a macrotask; macrotasks are executed one at a time, and after each macrotask completes, the microtask queue is drained as a whole group. Once that group of microtasks has run, the JS engine hands over control and the rendering engine takes its turn, rendering and painting the page; finally, Web Worker tasks are checked and processed if there are any.
As this event loop shows, if we want an asynchronous task to update the DOM, the best place for it is a microtask, because microtasks finish right before rendering begins. If we put it in a macrotask instead, the current script and its batch of microtasks run and the page is rendered first, without the change we wanted, because our macrotask has not yet executed.
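A minimal sketch of the difference (the element id is an illustrative assumption):

```js
const box = document.getElementById('box');

setTimeout(() => {
  // Macrotask: runs on a later turn of the loop, after a render
  // opportunity has already passed.
  box.style.background = 'red';
}, 0);

Promise.resolve().then(() => {
  // Microtask: drained right after the current script, before the browser
  // gets a chance to render, so this change is painted first.
  box.style.background = 'blue';
});

console.log('synchronous code runs before both callbacks');
```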
Avoid reflow and repaint
First we need to know what reflow and repaint are. Reflow: a DOM change affects the DOM's geometry, so the browser must recalculate the position and size of every affected element and then draw the result; this process is a reflow. Repaint: the browser does not need to recalculate the geometry of the DOM; an element has only changed its color or similar styles, so it can simply be repainted. From these definitions, a reflow always leads to a repaint, but a repaint does not necessarily involve a reflow. Reflow is the more expensive of the two and therefore deserves more attention.
To improve performance, we should avoid reflow and repaint as much as possible. Operations that change the DOM's geometry, operations that add or remove nodes from the DOM tree, and reads of properties that force an immediate, synchronous reflow (offsetTop, scrollTop, and the like) are all costly. The specific techniques can be summarized as follows:
- Cache values read from the DOM in JS variables, do the fast work in JS, and modify the DOM once, avoiding frequent changes
- Instead of changing styles property by property, merge them into a class and toggle the class name
- Take the DOM "offline": set display: none on the node, make all the modifications, then set display: block again (see the sketch below)
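A minimal sketch of these techniques (the element id and class name are illustrative assumptions):

```js
const panel = document.getElementById('panel');

// 1. Merge style changes into a class toggle instead of per-property writes.
panel.classList.add('highlighted');     // at most one reflow

// 2. Take the element "offline" while making many changes.
panel.style.display = 'none';           // removed from the render tree
for (let i = 0; i < 50; i++) {
  const row = document.createElement('div');
  row.textContent = `row ${i}`;
  panel.appendChild(row);               // no reflow while the panel is hidden
}
panel.style.display = 'block';          // one reflow when it returns
```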