Rendering pipeline
CSS in the rendering pipeline
Below is the page rendering pipeline once CSS is involved:
First, a request for the main page is initiated, by either the renderer process or the browser process, and handed to the network process for execution. After the network process receives the returned HTML data, it passes it to the renderer process, which parses the HTML and builds the DOM. Note in particular that there is an idle interval between requesting the HTML data and building the DOM, and this idle time can become a bottleneck for page rendering.
The rendering pipeline requires CSSOM
Like HTML, the content of a CSS file cannot be understood by the rendering engine directly, so it must be parsed into a structure the engine can work with: the CSSOM. Like the DOM, the CSSOM serves two purposes: it gives JavaScript the ability to manipulate stylesheets, and it provides the basic style information needed to build the layout tree. In the DOM, the CSSOM is exposed as document.styleSheets.
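For a quick look at what document.styleSheets exposes, here is a minimal sketch you could run in a page's console; the sample rule is hypothetical, and reading the rules of a cross-origin stylesheet will throw:

```ts
// Inspect the CSSOM: every parsed stylesheet is exposed via document.styleSheets.
// Note: reading cssRules of a cross-origin stylesheet throws a SecurityError.
for (const sheet of Array.from(document.styleSheets)) {
  console.log(sheet.href ?? "<inline <style> block>", sheet.cssRules.length, "rules");
}

// The CSSOM can also be manipulated from JavaScript, e.g. appending a rule
// to the first stylesheet (assumes the page has at least one same-origin stylesheet).
const firstSheet = document.styleSheets[0];
firstSheet.insertRule("body { background: #fafafa; }", firstSheet.cssRules.length);
```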
Factors affecting page presentation and optimization strategies
The rendering pipeline affects how quickly the page is first displayed, which in turn directly affects the user experience.
So let's look at the three stages the page goes through visually, from issuing the URL request to first displaying content.
- In the first stage, after the request is issued, the pipeline is waiting to commit the data; during this time the page still displays the content of the previous page.
- In the second stage, after the data is committed, the renderer process creates a blank page; this period is usually called the parsing white screen. The renderer waits for the CSS and JavaScript files to load, generates the CSSOM and DOM, builds the layout tree, and then goes through a series of further steps to prepare for the first render.
- In the third stage, after the first render, the complete page is generated and then drawn piece by piece.
Typically, the bottlenecks are downloading CSS files, downloading JavaScript files, and executing JavaScript. To shorten the white-screen duration, the following strategies can be adopted:
- Eliminate those two kinds of downloads by inlining the JavaScript and CSS into the HTML, so that rendering can start directly once the HTML file has been retrieved.
- Where inlining is not appropriate, minimize file size instead: use tools such as webpack to strip unnecessary comments and compress the JavaScript files.
- You can also mark JavaScript tags that are not needed during the parsing-HTML stage with async or defer.
- Large CSS files can be split by media query into multiple files for different purposes, so that a given CSS file is loaded only in the scenario that actually needs it (see the sketch below).
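As a rough sketch of the media-splitting idea, the snippet below attaches each split stylesheet with a media attribute, so a stylesheet whose media query does not match no longer blocks the first render; all file names are hypothetical:

```ts
// A minimal sketch: attach split stylesheets with media attributes so that
// a non-matching stylesheet is not render-blocking. File names are hypothetical.
function loadStylesheet(href: string, media: string): void {
  const link = document.createElement("link");
  link.rel = "stylesheet";
  link.href = href;
  link.media = media; // non-matching media => fetched at low priority, does not block render
  document.head.appendChild(link);
}

loadStylesheet("base.css", "all");                 // always needed
loadStylesheet("print.css", "print");              // only applied when printing
loadStylesheet("wide.css", "(min-width: 1200px)"); // only applied on wide screens
```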
The strategies above can shorten the white-screen duration, but real projects come in all shapes; these strategies cannot be applied blindly, so the best solution has to be tuned to the actual situation.
Layering and compositing mechanism
How a monitor displays images:
Every monitor has a fixed refresh rate, usually 60 Hz, i.e. 60 images per second. The images it shows come from an area of the graphics card called the front buffer. The monitor's job is simple: read the image from the front buffer 60 times per second and display it on screen.
Frame and frame rate concepts:
Each image generated by the rendering pipeline is called a frame, and the number of frames the pipeline produces per second is called the frame rate. For example, if 60 frames are produced in one second while scrolling, the frame rate is 60 Hz (or 60 FPS).
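As a rough illustration, the frame rate can be estimated from the page itself by counting requestAnimationFrame callbacks; this is a minimal sketch, not a precise measurement tool:

```ts
// A rough frame-rate estimate: count how many requestAnimationFrame callbacks
// (i.e. frames the pipeline produces) fire per second.
let frames = 0;
let windowStart = performance.now();

function tick(now: number): void {
  frames++;
  if (now - windowStart >= 1000) {
    console.log(`~${frames} FPS over the last second`);
    frames = 0;
    windowStart = now;
  }
  requestAnimationFrame(tick);
}
requestAnimationFrame(tick);
```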
How does the rendering engine generate a frame?
The rendering engine can generate a frame in three ways: repaint, reflow, and compositing.
- Reflow and repaint are performed on the main thread of the renderer process, which is relatively time-consuming;
- Compositing is performed on the renderer's compositor thread, which is fast and does not occupy the main thread.
How does the browser implement compositing?
- Layering: splitting the page content into layers improves rendering efficiency at the macro level.
- Tiling: splitting layers into tiles improves rendering efficiency at the micro level.
- Compositing: the operation of combining the layers back into a single image.
How do I systematically optimize pages?
Page optimization is all about making your pages display and respond faster.
Start by analyzing the three phases of the page life cycle: the loading phase, the interaction phase, and the closing phase.
- The loading stage covers everything from sending the request to rendering the complete page. The main factors affecting this stage are the network and JavaScript.
- The interaction stage covers the period from page load completion through the user's interaction with the page; the main factor affecting this stage is JavaScript.
- The closing stage mainly consists of the cleanup work the page performs after the user issues a close instruction.
Loading stage
The rendering pipeline in the loading stage is shown below:
- Resources such as images, audio, and video do not block the page's first render.
- JavaScript, the initially requested HTML file, and CSS files do block the first render, because HTML and JavaScript are needed to build the DOM and CSS is needed to build the render tree.
- Resources that block the first render of a page are called critical resources.
Based on critical resources, three core factors affecting the first render of a page can be identified:
- The number of critical resources.
- The size of the critical resources.
- The number of round-trip times (RTTs) needed to request the critical resources.
RTT (round-trip time) is the total delay between the moment the sender sends data and the moment it receives the acknowledgement from the receiver; it is an important network performance metric.
Optimization: the overall principle is to reduce the number of critical resources, the size of critical resources, and the number of RTTs needed to fetch them.
How can I reduce the number of critical resources?
- One way is to inline the JavaScript and CSS, as described above; if both are inlined, the number of critical resources drops from three to one.
- Alternatively, if a JavaScript file performs no DOM or CSSOM operations, you can add the async or defer attribute to its script tag; similarly, if a CSS file does not have to be loaded before the page is built, you can add a media attribute to its link tag so that it no longer blocks rendering. Once script tags carry async or defer and CSS link tags carry an unblocking media attribute, those files become non-critical resources.
How can I reduce the size of critical resources?
- You can compress CSS and JavaScript resources and strip comments from HTML, CSS, and JavaScript files; you can also turn CSS or JavaScript files into non-critical resources in the way described above.
How can I reduce the number of RTTs for critical resources?
- This follows from reducing the number and size of critical resources. In addition, a CDN can be used to shorten the duration of each RTT.
Interaction stage
In the interaction stage, how quickly each frame is rendered determines how smooth the interaction feels. The rendering pipeline of the interaction stage is shown in the figure below:
Optimization: make each individual frame faster to generate.
- Reduce JavaScript script execution time
- One approach is to break a function that runs for a long time into multiple smaller tasks so that each run takes less time.
- The other is to move work into Web Workers. A Web Worker has no DOM or CSSOM environment, which means JavaScript running in a worker cannot access the DOM.
- Avoid forced synchronous layout
This happens when JavaScript forces style computation and layout to be pulled forward into the current task.
- Avoid layout thrashing
That is, avoid recomputing styles and layout repeatedly; in particular, try not to query layout values while you are modifying the DOM structure (see the sketch after this list).
- Make good use of CSS compositing animation
If you know in advance that you are animating an element, it is best to mark it as will-change, which tells the rendering engine that it needs to create a separate layer for that element.
- Avoid frequent garbage collection
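As a minimal sketch of the forced-layout and layout-thrashing points above (the .box class name is hypothetical), batching all layout reads before all writes keeps style and layout recalculation from being forced on every loop iteration:

```ts
// A minimal sketch of avoiding layout thrashing (the ".box" class is hypothetical).
const boxes = Array.from(document.querySelectorAll<HTMLElement>(".box"));

// Bad: each iteration reads offsetHeight right after the previous write,
// forcing a synchronous style/layout pass on every loop turn.
function resizeThrashing(): void {
  for (const box of boxes) {
    box.style.height = `${box.offsetHeight * 1.5}px`;
  }
}

// Better: batch all reads first, then do all writes,
// so layout is recomputed at most once after the writes.
function resizeBatched(): void {
  const heights = boxes.map(box => box.offsetHeight); // read phase
  boxes.forEach((box, i) => {                         // write phase
    box.style.height = `${heights[i] * 1.5}px`;
  });
}
```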
Virtual DOM
The shortcomings of operating on the DOM directly:
Manipulating the DOM directly can trigger a cascade of work in the rendering pipeline, and if it is not handled carefully it can even trigger forced synchronous layout and layout thrashing, which is why direct DOM manipulation demands so much care.
How does the virtual DOM solve the problem of manipulating the DOM directly?
- Apply page changes to the virtual DOM rather than directly to the DOM.
- When changes are applied to the virtual DOM, it does not rush to render the page; it merely adjusts its internal state, which makes operating on the virtual DOM very cheap.
- Once enough changes have been collected in the virtual DOM, they are applied to the real DOM all at once.
Given the above analysis, how does the virtual DOM work?
- Creation phase. First, a virtual DOM tree is built from the JSX and the underlying data; it mirrors the structure of the real DOM tree. The real DOM tree is then created from the virtual DOM tree, and once it has been generated, the rendering pipeline is triggered to output the page to the screen.
- Update phase. If the data changes, a new virtual DOM tree is created from the new data. React then compares the two trees to find the differences and applies them to the real DOM tree in a single pass. Finally, the rendering engine updates the pipeline and generates the new page (a conceptual sketch follows).
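To make the create/diff/apply flow concrete, here is a deliberately tiny, hypothetical sketch of the virtual DOM idea. It illustrates the concept only, is not how React's reconciler is actually implemented, and assumes the old and new trees have the same number of children:

```ts
// A minimal virtual DOM sketch: build a lightweight tree, render it to the real
// DOM once, and on update only touch the nodes that actually changed.
interface VNode {
  tag: string;
  children: (VNode | string)[];
}

function render(vnode: VNode | string): Node {
  if (typeof vnode === "string") return document.createTextNode(vnode);
  const el = document.createElement(vnode.tag);
  vnode.children.forEach(child => el.appendChild(render(child)));
  return el;
}

// Apply the differences between old and new virtual trees to the real node.
function patch(node: Node, oldV: VNode | string, newV: VNode | string): void {
  if (typeof oldV === "string" || typeof newV === "string") {
    if (oldV !== newV) node.parentNode?.replaceChild(render(newV), node);
    return;
  }
  if (oldV.tag !== newV.tag) {
    node.parentNode?.replaceChild(render(newV), node);
    return;
  }
  // Same tag: recurse into children (same child count assumed for brevity).
  newV.children.forEach((child, i) =>
    patch(node.childNodes[i], oldV.children[i], child)
  );
}

// Creation phase, then an update that only replaces one text node.
const v1: VNode = { tag: "ul", children: [{ tag: "li", children: ["a"] }, { tag: "li", children: ["b"] }] };
const root = render(v1);
document.body.appendChild(root);

const v2: VNode = { tag: "ul", children: [{ tag: "li", children: ["a"] }, { tag: "li", children: ["c"] }] };
patch(root, v1, v2);
```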
The core algorithm behind React's update mechanism is reconciliation. When the old algorithm could no longer cope with increasingly complex virtual DOM trees, the React team rewrote it; the new algorithm is called the Fiber Reconciler, and the old one the Stack Reconciler.
Double buffering and the MVC pattern
- Double buffering is a classic idea, applied in many situations to solve problems such as wasted refreshes and screen flicker. The virtual DOM is one embodiment of the double-buffering idea.
With double buffering, intermediate results of the computation are first written to a separate buffer. Once the computation is finished and the complete image sits in that buffer, its contents are copied to the display buffer in one step, so the output of the whole image stays stable.
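A rough sketch of the double-buffering idea using two canvases (the #screen element is hypothetical, and modern browsers already buffer canvas output internally, so this is purely illustrative): each frame is drawn off-screen first and copied to the visible canvas in one step.

```ts
// Double buffering with two canvases: draw the whole frame off-screen,
// then copy the finished image to the visible canvas at once.
const visible = document.querySelector<HTMLCanvasElement>("#screen")!; // hypothetical element
const buffer = document.createElement("canvas");
buffer.width = visible.width;
buffer.height = visible.height;

const bufCtx = buffer.getContext("2d")!;
const screenCtx = visible.getContext("2d")!;

function drawFrame(time: number): void {
  // Draw the complete frame into the off-screen buffer.
  bufCtx.clearRect(0, 0, buffer.width, buffer.height);
  bufCtx.fillRect(50 + Math.sin(time / 500) * 40, 50, 20, 20);

  // Copy the finished frame to the visible canvas; no partial state is ever shown.
  screenCtx.clearRect(0, 0, visible.width, visible.height);
  screenCtx.drawImage(buffer, 0, 0);

  requestAnimationFrame(drawFrame);
}
requestAnimationFrame(drawFrame);
```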
- The core idea of MVC is to separate the data from the view and communicate between them through the controller.
The basic MVC structure:
An MVC model built on React and Redux:
In this figure, the virtual DOM can be thought of as the view part of MVC, with the controller and model provided by Redux. The specific flow is as follows (a minimal store sketch follows the list):
- The controller in the figure monitors DOM changes and notifies the model to update the data when the DOM changes.
- After the model data is updated, the controller notifies the view that the model data has changed.
- When the view receives the update message, it generates a new virtual DOM based on the data provided by the model.
- After the new virtual DOM is generated, it is compared with the previous virtual DOM to find the changed nodes.
- After comparing the changed nodes, React applies the changed virtual nodes to the DOM, which triggers DOM node updates.
- DOM node changes trigger a series of subsequent changes in the rendering pipeline to update the page.
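The dispatch-update-notify loop can be sketched with a minimal, hypothetical Redux-style store; this illustrates the data flow only, not Redux's actual implementation:

```ts
// A tiny Redux-style store: the view dispatches actions, the store (model)
// updates its state, and subscribers (the view) are notified to re-render.
type Action = { type: "increment" } | { type: "decrement" };
type State = { count: number };
type Listener = () => void;

function createStore(reducer: (s: State, a: Action) => State, initial: State) {
  let state = initial;
  const listeners: Listener[] = [];
  return {
    getState: () => state,
    dispatch(action: Action) {
      state = reducer(state, action); // model updates its data
      listeners.forEach(l => l());    // controller notifies the view
    },
    subscribe(listener: Listener) {
      listeners.push(listener);
    },
  };
}

const store = createStore(
  (s, a) => (a.type === "increment" ? { count: s.count + 1 } : { count: s.count - 1 }),
  { count: 0 }
);

// The "view": re-renders (here, just logs) whenever the model changes.
store.subscribe(() => console.log("render with state:", store.getState()));
store.dispatch({ type: "increment" }); // render with state: { count: 1 }
```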
Progressive Web Applications (PWA)
There are three major evolution paths for browsers:
- The first is applications becoming web-based;
- The second is Web applications going mobile;
- The third is the Web operating system.
How does PWA address the mobile Web?
PWA stands for Progressive Web App; literally, a "progressive + Web application".
Definition: a set of ideas for progressively enhancing the strengths of the Web and progressively closing the gap with native applications or mini programs through technical means. Any technology built on this philosophy can be classified as PWA.
Shortcomings of Web applications
- First, Web applications lack offline capability and are essentially unusable offline or on weak networks. Users expect an immersive experience, and being able to use an application smoothly offline or on a weak network is a basic requirement.
- Second, Web applications lack the ability to push messages; as an app vendor, you need to be able to push messages to the application's users.
- Finally, Web applications lack a first-class entry point: the ability to install the application on the desktop and open it directly from there when needed, rather than having to go through a browser every time.
PWA proposes two solutions to these shortcomings: introduce a Service Worker to tackle offline caching and message push, and introduce manifest.json to tackle the first-class entry point. Let's take a closer look at how the Service Worker works.
What is a Service Worker
Let’s take a look at how the Service Worker addresses offline storage and notification push issues.
In fact, before the Service Worker, the WHATWG had introduced the App Cache standard for caching pages. In practice, however, App Cache exposed many problems and was widely criticized, so the standard eventually had to be abandoned. A successful standard has to survive the test of real-world use.
So in 2014 the standards committee proposed the Service Worker. Its main idea is to place an interceptor between the page and the network that intercepts requests and serves cached responses. The overall structure is shown in the figure below:
Before a Service Worker is installed, the web app requests resources directly through the network module. Once the Service Worker is installed, every resource request from the web app first passes through the Service Worker, which decides whether to return a resource it has cached or to fetch it from the network again; control over resource loading is handed entirely to the Service Worker.
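A minimal cache-first Service Worker sketch is shown below (the cache name and pre-cached file list are hypothetical); it pre-caches an app shell on install and answers fetches from the cache before falling back to the network:

```ts
// sw.ts — a minimal cache-first Service Worker sketch.
// Assumes the "WebWorker" lib in tsconfig; the declare line retypes `self` for service-worker events.
declare const self: ServiceWorkerGlobalScope;
export {};

const CACHE = "app-cache-v1";
const APP_SHELL = ["/", "/index.html", "/app.js", "/style.css"]; // hypothetical files

self.addEventListener("install", event => {
  // Pre-cache the application shell while the worker is being installed.
  event.waitUntil(caches.open(CACHE).then(cache => cache.addAll(APP_SHELL)));
});

self.addEventListener("fetch", event => {
  // Intercept every request: serve the cached response if there is one,
  // otherwise fall back to the network.
  event.respondWith(
    caches.match(event.request).then(cached => cached ?? fetch(event.request))
  );
});
```

The page side would register the worker with navigator.serviceWorker.register("/sw.js") (path hypothetical).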
WebComponent
Componentization
There is no single, precise definition of componentization, but it can be summed up in one phrase: high cohesion inside, low coupling outside. Internally, the parts of a component are tightly combined and interdependent; externally, a component has few connections to other components and exposes a simple interface.
CSS and DOM are two obstacles to componentization
- The global nature of CSS hinders componentization
- There is only one DOM on the page, and the DOM can be read and modified directly from anywhere
For this reason, WebComponent provides a combination of technologies, including custom elements, shadow DOM, and HTML templates; refer to MDN for more details.
Custom elements, shadow DOM, and HTML templates are the three techniques that let developers isolate CSS and the DOM.
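A minimal custom-element sketch (the tag name hello-card is hypothetical) showing how shadow DOM keeps a component's style and DOM isolated from the rest of the page:

```ts
// A template holds the component's markup and scoped styles.
const template = document.createElement("template");
template.innerHTML = `
  <style>
    /* This style lives in the shadow tree and cannot leak out to the page. */
    p { color: teal; }
  </style>
  <p>Hello from an isolated component</p>
`;

class HelloCard extends HTMLElement {
  constructor() {
    super();
    // The shadow root isolates this DOM subtree from outside selectors and scripts.
    const shadow = this.attachShadow({ mode: "open" });
    shadow.appendChild(template.content.cloneNode(true));
  }
}

customElements.define("hello-card", HelloCard);
// Usage: place <hello-card></hello-card> anywhere in the page.
```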
Final words
The learning material comes from the Geek Time course "Browser Working Principle and Practice" by teacher Li Bing. Next, a daily check-in:
- Day 01 Chrome architecture: Why 4 processes with only 1 page open?
- Day 02 TCP: How to ensure that a page file can be delivered to the browser in its entirety?
- Day 03 HTTP Request Flow: Why do many sites open quickly the second time?
- Day 04 Navigation flow: What happens between entering the URL and presenting the page?
- Day 05 Rendering Flow: How do HTML, CSS, and JavaScript become pages?
- Day 06 JavaScript execution mechanisms in the browser
- Day 07 How V8 works
- Day 08 The page loop system in the browser
- Day 09 Pages in the browser