1. Foreword

Here’s an interview question:

Interview question: Can you explain in detail the process from entering the URL until the page is loaded?

This question covers a wide range of topics and is a good vehicle for a whole body of front-end knowledge.

Any front-end developer who wants to reach a higher level will sooner or later comb through their knowledge system again; without a solid knowledge system, you cannot climb higher!

Don't believe it adds up to a huge body of knowledge? Read on.

2. Analyzing the question

For this question, the first thing to determine is what the interviewer wants: a brief overview or an in-depth description.

So answer around the key points; otherwise the answer becomes long and scattered, the effect is poor, and the main thread gets lost.

Next, we introduce the main process and then describe each step in depth. I estimate this interview question takes about 15 minutes to answer well.

Brief answer: it shows basic skills and the ability to summarize knowledge; cover everything, touching each point lightly.

In-depth answer: it tests how well each knowledge point is mastered, and to what depth.

3. The main process

After combing through browser rendering principles, the JS run-time mechanism, and the JS engine's parsing process, everything finally clicked into place: I now have an overall architecture, and the previously scattered knowledge points hang together coherently.

1. From the browser receiving the URL to starting a network request thread (involves: browser mechanisms, the relationship between threads and processes, etc.)

2. Starting the network thread and issuing a complete HTTP request (involves: DNS lookup, TCP/IP connection, the 5-layer network protocol stack, etc.)

3. The server receiving the request and handing it to the corresponding back end (involves: load balancing, security interception, internal back-end processing, etc.)

4. HTTP interaction between back end and front end (involves: HTTP headers, response codes, packet structure, cookies, etc.; you can mention cookie optimization for static resources, and encodings such as gzip compression)

5. Cache issues: the HTTP cache (involves: HTTP cache headers, ETag, Expires, Cache-Control, etc.)

6. The browser's parsing process after receiving the HTTP packets (involves: HTML lexical analysis, parsing into a DOM tree, parsing CSS into a CSS rule tree in parallel, and merging the two into a render tree; then layout, painting, composite-layer composition, GPU rendering, handling of external resources, the load and DOMContentLoaded events, etc.)

7. The CSS visual formatting model (involves: element rendering rules, such as containing blocks, the box model, BFC, IFC, and other concepts)

8. The JS engine's parsing process (involves: the interpretation stage, the preprocessing stage, the execution stage that generates the execution context, the VO (variable object), the scope chain, garbage collection, etc.)

9. Others (extended modules: cross-origin requests, web security, etc.)

From the browser receiving the URL to starting a network request thread

Involves: the browser's process and thread model, and the JS run-time mechanism.

1. Browsers are multi-process

(1) Browsers are multi-process;

(2) Different types of tabs start a new process;

(3) Tabs of the same type are merged into a process.

Browser processes and their functions:

1. Browser process: only one. (1) Responsible for creating and destroying each tab; (2) responsible for displaying the browser window; (3) responsible for resource management and downloads.

2. Third-party plug-in process: can be multiple; each third-party plug-in in use gets a corresponding process.

3. GPU process: at most one; responsible for 3D drawing and hardware acceleration.

4. Browser rendering process (the browser kernel): can be multiple, one process per tab; mainly responsible for parsing, executing, and rendering HTML, CSS, and JS, as well as event handling.

2. Browser rendering process (kernel process)

Each TAB page is a browser kernel process, and each process is multi-threaded, with several classes of child threads:

(1) GUI thread; (2) JS engine thread; (3) event trigger thread; (4) Timer thread; (5) Asynchronous HTTP network request threads

It can be seen that the JS engine is a thread in the kernel process, so it is often said that the JS engine is single-threaded.

3. Parse the URL

When you enter a URL, it is parsed (a URL is a uniform resource locator).

A URL includes several parts: (1) protocol: the protocol head, such as HTTP, HTTPS, FTP; (2) host: host name or IP address; (3) port: port number; (4) path: directory path; (5) query: query parameters; (6) fragment: the hash value after #, used to locate a position within the page.
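These parts map directly onto the standard WHATWG URL API available in browsers and Node; a quick sketch (the example address is invented for illustration):

```javascript
// Decompose an illustrative URL with the standard WHATWG URL API.
const url = new URL("https://www.example.com:8080/path/to/page?a=1&b=2#section");

console.log(url.protocol); // "https:"          -> (1) protocol head
console.log(url.hostname); // "www.example.com" -> (2) host name
console.log(url.port);     // "8080"            -> (3) port number
console.log(url.pathname); // "/path/to/page"   -> (4) directory path
console.log(url.search);   // "?a=1&b=2"        -> (5) query
console.log(url.hash);     // "#section"        -> (6) fragment after #
```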

4. Separate threads for network requests

Each network request needs its own thread; for example, once the URL is resolved to the HTTP protocol, a new network thread is created to download the resource.

So the browser, based on the parsed protocol, opens a network thread to request the resource.

From starting the network thread to issuing a complete HTTP request

Including: DNS lookup, TCP/IP connection establishment, the 5-layer network protocol stack, and so on.

1. Obtain the IP address from DNS

If the entered domain name needs to be resolved into an IP address by DNS, the process is as follows:

(1) If the browser has a DNS cache, it is used directly; if not, the operating system's hosts cache is checked.

(2) If neither has it, a DNS server is queried for the corresponding IP address (this process involves routing and has its own caches).

Note: 1. The domain-name query may go through a CDN scheduler (if the site uses a CDN);

2. DNS resolution is time-consuming; if too many domain names need resolving, first-screen loading slows down.

2. TCP/IP request construction

The essence of HTTP is TCP/IP: establishing a connection requires a three-way handshake, and disconnecting requires four waves.

TCP splits long HTTP data into shorter packets and establishes a connection with the server through the three-way handshake for reliable transmission.

Three-way handshake steps:

Client: Hello, are you the server?
Server: Hello, I am the server. Are you the client?
Client: Yes, I am the client.

Then, when the connection is closed, four waves are required (four because TCP is full-duplex: each direction must be shut down separately).

The four waves:

Active side: I have finished sending and am closing my channel to you.
Passive side: Received; your channel to me is closed.
Passive side: My channel to you is now closed as well.
Active side: Received; the two sides can no longer communicate.

TCP/IP concurrency limit

Browsers limit the number of concurrent TCP connections to the same domain (roughly 2-10). Moreover, in HTTP1.0 each resource download often requires its own TCP/IP request. Many resource-optimization schemes exist precisely because of this bottleneck.
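Those optimization schemes often boil down to queueing work so that no more than N requests are in flight at once. A minimal sketch of such a concurrency limiter (the function name and the limit of 2 are my own choices, not from the article):

```javascript
// Run async task functions with at most `limit` in flight at a time,
// mimicking the browser's per-domain TCP connection cap.
function runWithLimit(tasks, limit) {
  return new Promise((resolve) => {
    const results = [];
    let next = 0, running = 0, done = 0;
    function launch() {
      while (running < limit && next < tasks.length) {
        const i = next++;
        running++;
        tasks[i]().then((value) => {
          results[i] = value;
          running--;
          if (++done === tasks.length) resolve(results);
          else launch();
        });
      }
    }
    if (tasks.length === 0) resolve(results);
    else launch();
  });
}

// Usage: six simulated "downloads", at most two at a time.
const delay = (ms, v) => new Promise((r) => setTimeout(() => r(v), ms));
const tasks = [1, 2, 3, 4, 5, 6].map((n) => () => delay(10, n));
runWithLimit(tasks, 2).then((res) => console.log(res)); // [1, 2, 3, 4, 5, 6]
```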

GET and POST

GET and POST both ride on TCP/IP, but beyond the HTTP-level differences they also differ at the TCP/IP level: GET produces one TCP packet, while POST produces two.

Specifically:

(1) When a GET request is made, the browser will send the header and data together, and the server will respond with 200 (return data).

(2) In a POST request, the browser sends the headers, the server responds with 100 continue, the browser sends data, and the server responds with 200 (returns data).

3. Layer 5 network protocol stack

The client sends an HTTP request to the server, which goes through a series of processes.

The request sent by the client passes down the stack: the application layer initiates the HTTP request; the transport layer establishes the TCP connection through the three-way handshake; the network layer handles IP addressing; the data-link layer encapsulates the data into frames; and the physical layer transmits it over the physical medium.

The server receives the request in reverse.

Layer 5 network protocol:

1. Application layer (DNS, HTTP) : DNS resolves to IP and sends HTTP requests.

2. Transport layer (TCP, UDP) : establish TCP connection (three-way handshake);

3. Network layer (IP, ARP) : IP addressing;

4. Data link layer (PPP) : encapsulation into frames;

5. Physical layer (transmits the bit stream over physical media): physical transmission (through twisted pair, electromagnetic waves, and other media).

In fact, the complete OSI model has seven layers; compared with the five-layer stack it adds the session layer and the presentation layer.

OSI seven-layer framework: physical layer, data link layer, network layer, transport layer, session layer, presentation layer, application layer

Presentation layer: mainly deals with the presentation of interactive information in two communication systems, including data format exchange, data encryption and decryption, data compression and terminal type conversion, etc.

Session layer: Specifically manages conversations between different users and processes, such as controlling login and logout processes.

From the server receiving the request to the corresponding back end

When a server receives a request, a lot of internal processing happens.

Involves: load balancing, back-end processing, etc.

1. Load balancing

For large projects, concurrency is too high for a single server to handle. Generally, several servers form a cluster, coordinated by a reverse proxy that balances the load. There is more than one way to implement load balancing.

In a nutshell: the user's request goes to a scheduling server (a reverse proxy, such as nginx doing load balancing); according to its scheduling algorithm, the scheduler distributes the request to an appropriate server in the cluster, waits for that server's HTTP response, and forwards it back to the user.
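The simplest of those scheduling algorithms is round-robin, which just rotates through the cluster. A toy sketch (the server addresses are invented; real balancers such as nginx add health checks, weights, and more):

```javascript
// Toy round-robin scheduler: each call returns the next server in the
// cluster, the way a reverse proxy's default upstream policy rotates requests.
function makeRoundRobin(servers) {
  let i = 0;
  return function pick() {
    return servers[i++ % servers.length];
  };
}

const pick = makeRoundRobin(["10.0.0.1", "10.0.0.2", "10.0.0.3"]);
console.log(pick()); // "10.0.0.1"
console.log(pick()); // "10.0.0.2"
console.log(pick()); // "10.0.0.3"
console.log(pick()); // "10.0.0.1" (wraps around)
```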

2. Background processing

Typically the backend is deployed in a container. The process is as follows:

(1) The container receives the request (such as the Tomcat container);

(2) The request is then received by the daemon in the corresponding container (such as a Java program);

(3) Then it is the unified processing of the background itself, and the response results after the processing is completed.

To be specific:

(1) Generally, some backend has unified authentication, such as security interception and cross-domain authentication;

(2) If it does not meet the authentication rules, it directly returns the corresponding HTTP packet (such as rejecting the request);

(3) If verification passes, the request enters the actual back-end code; the program receives it and performs database queries, heavy computation, and so on;

(4) After the execution of the program, it will return an HTTP response package (generally this step will go through multi-layer encapsulation);

(5) Then return the packet from the back end to the front end to complete the interaction.

HTTP interaction between back end and front end

The front and back ends interact using HTTP packets as the information carrier.

1. HTTP packet structure

A packet generally consists of general headers, request/response headers, and a request/response body.

1.1 General headers

Request Url: indicates the address of the requested Web server

Request Method: Request Method (Get, POST, OPTIONS, PUT, HEAD, DELETE, CONNECT, TRACE)

Status Code: indicates the return Status Code of the request. For example, 200 indicates success

Remote Address: the remote server address of the request (it will be resolved to an IP address). For example, when a cross-origin request is rejected, the Method is OPTIONS and the status code is 404/405.

Methods come in two batches:

HTTP1.0 defines three request methods: GET, POST, and HEAD. Additional request methods: PUT, DELETE, LINK, and UNLINK.

HTTP1.1 defines eight request methods: GET, POST, HEAD, OPTIONS, PUT, DELETE, TRACE, and CONNECT.

Some common status codes:

200: the request completed successfully and the requested resource is sent back to the client
304: the requested page has not been modified since the last request; the client should use its local cache
400: client request error (for example, blocked by a security module)
401: unauthorized request
403: access forbidden (for example, forbidden when not logged in)
404: resource not found
500: internal server error
503: service unavailable

General ranges:

1xx: informational; the request was received and processing continues
2xx: success; the request was received, understood, and accepted
3xx: redirection; further action must be taken to complete the request
4xx: client error; the request has a syntax error or cannot be fulfilled
5xx: server error; the server failed to fulfill a valid request

1.2 Request headers/Response Headers

Common request headers (parts)

Accept: the content types the client can receive
Accept-Encoding: the compression types supported by the browser, such as gzip; types outside this list cannot be received
Content-Type: the type of the entity content the client sends
Cache-Control: specifies the caching mechanism that requests and responses follow, such as no-cache
If-Modified-Since: matched against the server's Last-Modified to see whether the file has changed
If-None-Match (HTTP1.1): matched against the server's ETag to see whether the file content has changed (very accurate)
Connection: the connection type, for example keep-alive
Host: the host of the requested server
Origin: where the request originated (only down to the port); Origin respects privacy more than Referer does
Referer: the source URL of the page (applies to all types of requests, down to the detailed page address; often used by CSRF interceptors)
User-Agent: some necessary information about the user's client, i.e. the UA string

Common response headers (parts)

Access-Control-Allow-Headers: the request headers allowed by the server
Access-Control-Allow-Methods: the request methods allowed by the server
Access-Control-Allow-Origin: the origins allowed by the server
Content-Type: the type of entity content the server returns
Date: the time the data was sent from the server
Cache-Control: tells the browser or other clients when and how it is safe to cache the document
Last-Modified: the time the resource was last modified on the server
Expires: the time after which the document is considered expired and may no longer be cached
Max-Age: how long (in seconds) the client may cache the resource (a value of Cache-Control)
ETag: the current entity tag of the requested resource
Set-Cookie: sets a cookie associated with the page; the server passes cookies to the client through this header
Keep-Alive: if the client sent keep-alive, the server responds accordingly (e.g. timeout=38)
Server: some information about the server

In general, request headers and response headers are matched for analysis.

Such as:

(1) Accept in the request header must match Content-Type in the response header; otherwise an error is reported.

(2) In a cross-origin request, Origin in the request header must match Access-Control-Allow-Origin in the response header; otherwise a cross-origin error is reported.

(3) When caching, If-Modified-Since and If-None-Match in the request headers correspond to Last-Modified and ETag in the response headers, respectively.

1.3 Request/response Entities

In an HTTP request, there is a message entity in addition to the header.

The request entity carries the required parameters (for POST requests).

For example: (1) the entity can carry serialized form parameters (a=1&b=2) or a form directly (a Form Data object; uploads can mix in other files), etc.

In the response entity, that’s what the server needs to pass to the client.

Today, an interface request typically carries JSON in the response entity, whereas a page request simply returns an HTML string that the browser parses and renders.

1.4 CRLF

CRLF (Carriage Return, Line Feed) usually exists as a delimiter.

There is a CRLF separation between the request header and the entity message and a CRLF separation between the response header and the response entity.
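To make the delimiter concrete, here is a sketch that assembles a raw HTTP request by hand (the host, path, and body are invented); note the extra CRLF, i.e. a blank line, between the headers and the entity:

```javascript
const CRLF = "\r\n";

// Assemble a raw HTTP/1.1 request: request line, header lines,
// then an empty line (a bare CRLF) separating headers from the entity body.
function buildRequest(method, path, headers, body) {
  const lines = [`${method} ${path} HTTP/1.1`];
  for (const [name, value] of Object.entries(headers)) {
    lines.push(`${name}: ${value}`);
  }
  return lines.join(CRLF) + CRLF + CRLF + body;
}

const raw = buildRequest("POST", "/login", {
  Host: "www.example.com",
  "Content-Type": "application/x-www-form-urlencoded",
  "Content-Length": "7",
}, "a=1&b=2");

console.log(JSON.stringify(raw)); // the "\r\n\r\n" before "a=1&b=2" is the separator
```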

The following figure shows a brief analysis of the HTTP packet structure of a request:

2. Cookies and optimization

Cookies are a form of local storage in the browser, generally used to help client-server communication; combined with server-side sessions, they are often used for identity verification.

On the login page, when the user logs in, the server creates a session containing the user's information (user name, password, etc.) and a sessionid (effectively the key to that session on the server). The server then writes a cookie on the login response, e.g. jsessionid=xxx, and the browser stores it. On subsequent visits to pages under the same domain, the browser automatically attaches the cookie and verification happens automatically, so no fresh login is needed while it is valid.

Generally speaking, cookies should not store sensitive information (never store user names and passwords in plain text), because it is very insecure. If you must store something there, first set httpOnly on the cookie (so it cannot be read by JS), and additionally consider asymmetric encryption such as RSA (anything the browser holds natively is easy to crack and not secure).

Like this scenario:

The client has cookies under domain A (written by the server at login). A page under domain A depends on many static resources (all under domain A; say 20 of them). When the page loads and requests these static resources, the browser attaches the cookie by default, so each of the 20 HTTP requests for static resources carries the cookie, even though static resources do not need cookie validation at all. This is a big waste and slows down access (because more bytes are transferred).

Of course, there are optimizations for this scenario (multi-domain splitting). The specific approach is:

(1) Group the static resources into different domain names (for example, static.base.com)

(2) The cookie for static.base.com will not be included in the request for page.base.com, so waste is avoided

Speaking of multi-domain splitting, two more points:

(1) On mobile, if too many domains are requested, request speed drops (each extra domain-name resolution costs time, and mobile bandwidth is generally worse than PC);

(2) There is an optimization for this: dns-prefetch, which lets the browser resolve a domain's DNS in advance while idle, but it should be used judiciously.
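In markup, a dns-prefetch hint is a single link tag in the page head; the domain here is the illustrative static.base.com from the example above:

```html
<!-- Hint: resolve this domain's DNS while the browser is idle -->
<link rel="dns-prefetch" href="//static.base.com">
```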

For the cookie interaction, see the following summary

3. Gzip compression

First, gzip relates to Accept-Encoding in the request header: it is one of the compression types the browser supports. Gzip is a compression format that requires browser support (most browsers support it) and has a good compression rate (up to around 70%).

Gzip is generally enabled on web servers such as Apache, Nginx, and Tomcat.

Besides gzip there is the deflate compression format, which is less efficient and less popular.

Therefore, you only need to enable gzip compression on the server, and all subsequent requests are served in gzip-compressed form, which is very convenient.

4. Long connection and short connection

First let’s look at the definition of TCP/IP:

(1) Long connection: multiple packets can be sent continuously over one TCP connection. While the connection is held, if no data is being sent, both sides need to send probe packets to maintain it; generally some keep-alive maintenance is needed (similar to heartbeat packets).

(2) Short connection: when the two parties have data to exchange, a TCP connection is established; after the data is sent, the connection is closed.

Let’s look at the HTTP level:

(1) HTTP1.0 uses short connections by default: the browser establishes a connection for each HTTP operation and closes it when the task ends; for example, each static-resource request is a separate connection.

(2) From HTTP1.1 on, long connections are the default, via the header Connection: keep-alive. With keep-alive, when a web page is opened, the TCP connection used to transmit HTTP between client and server is not closed; if the client visits a page on the same server again, the established connection is reused.

Note: keep-alive does not hold the connection forever; it has a duration, configured on the server. Also, the long connection works only when both the client and the server support it.

5, http2.0

HTTP2.0 is not HTTPS; it is the next-generation HTTP specification (an HTTPS site can also speak HTTP2.0).

Compare the significant differences between HTTP1.1 and HTTP2.0:

(1) In HTTP1.1, each resource request needs its own TCP/IP connection, so each resource corresponds to one TCP/IP request; since TCP/IP connections are limited in number, many resources mean slow loading.

(2) In HTTP2.0, one TCP/IP connection can carry multiple resources: a single connection serves many requests, split into small frames, so speed improves significantly.

Therefore, once HTTP2.0 is fully adopted, many HTTP1.1 optimizations are no longer needed (e.g. sprite sheets, multi-domain splitting of static resources).

Here are some features of HTTP2.0:

(1) multiplexing (one TCP/IP can request multiple resources);

(2) header compression (HTTP header compression, reduce volume);

(3) binary framing (add a binary framing layer between the application layer and the transmission layer to improve transmission performance and achieve low latency and high throughput);

(4) Server-side push (the server can send multiple responses to a request from the client to actively notify the client);

(5) Request priority (If a stream is assigned a priority, it will be processed based on this priority, with the server deciding how many resources are needed to process the request)

6, HTTPS

HTTPS is a secure version of HTTP. Some payment services, for example, are based on HTTPS because HTTP requests are so insecure.

In a nutshell, the difference between HTTPS and HTTP is that an SSL connection is established before requests are made, ensuring that subsequent communication is encrypted and cannot easily be intercepted and analyzed.

Generally speaking, upgrading a site to HTTPS requires back-end support (certificate application and so on), and HTTPS costs more than HTTP (because of the extra SSL handshake, encryption, and so on), so HTTP2.0 pairs well with HTTPS (for better speed).

The main focus is on the SSL/TLS handshake flow, as follows (brief) :

(1) The browser requests to establish an SSL link, and sends a random number (Client Random) and the encryption method supported by the client to the server, such as RSA encryption, which is plaintext transmission.

(2) The server selects a set of encryption algorithms and hash algorithms, returns a random number (Server Random), and sends its identity information back to the browser in the form of a certificate (the certificate contains the website address, asymmetric encrypted public key, certificate authority and other information).

(3) After the browser receives the server certificate:

1. It first verifies the certificate's validity (whether the authority is legitimate and whether the certificate's domain matches the site being visited). If the certificate is trusted, the browser shows a small padlock icon; otherwise it shows a warning.

2. After the certificate check (whether trusted or not), the browser generates a new random number (Premaster Secret), encrypts it with the public key from the certificate and the agreed encryption method, and sends it to the server.

3. Using Client Random, Server Random, and Premaster Secret, both sides derive, through an agreed algorithm, the symmetric session key used to encrypt the HTTP data transfer.

4. Use the hash algorithm to calculate the handshake message, encrypt the message with the generated session key, and finally send all the previously generated information to the server.

(4) The server receives a reply from the browser

1. It decrypts the message with its own private key to obtain the Premaster Secret.

2. It derives the session key using the same rules as the browser.

3. Decrypt the handshake message sent by the browser using the session key and verify whether the hash is the same as that sent by the browser.

4. Use the session key to encrypt a handshake message and send it to the browser

(5) The browser decrypts and computes the hash value of the handshake message. If the hash value is the same as that sent by the server, the handshake ends.

All subsequent HTTPS communication data is encrypted with the previously generated session key, using a symmetric encryption algorithm.

Caching: HTTP caching

Caching is a major efficiency factor in HTTP interactions.

1. Strong cache vs. negotiated cache

Caches can be simply divided into two types: strong cache (200 from cache) and negotiated cache (304);

Distinction brief introduction:

(1) Strong cache (200 from cache): the browser determines that the local cache has not expired and uses it directly, without issuing an HTTP request.

(2) Negotiated cache (304): the browser sends an HTTP request to the server; the server tells it the file has not changed, and the browser uses its local cache.

For the negotiated cache, Ctrl+F5 forces a refresh and makes it invalid.

For the strong cache, a new request is sent only after the resource path is updated.

2. Brief description of the cache header

How do you distinguish strong cache from negotiated cache in practice?

Through different HTTP headers.

Belonging to a forced cache:

(HTTP1.1) Cache-Control / Max-Age

(HTTP1.0) Pragma / Expires

Note: Cache-Control values include public, private, no-store, no-cache, and max-age.

Belonging to the negotiated cache:

(HTTP1.1) If-None-Match / ETag

(HTTP1.0) If-Modified-Since / Last-Modified

There is also a meta tag in HTML pages that controls the caching scheme: Pragma.

<META HTTP-EQUIV="Pragma" CONTENT="no-cache">

However, this approach is rarely used because support for it is poor (caching proxy servers, for example, do not support it), so it is not recommended.

3. Cache header differences

HTTP1.1 added several headers to make up for HTTP1.0's shortcomings.

Cache control in HTTP1.0:

(1) Pragma: strictly speaking not a cache-control header; setting no-cache invalidates the local cache (it is an HTTP1.0 header kept for backward compatibility).

(2) Expires: configured on the server side; it belongs to the strong cache and tells the browser not to send requests before the specified time, using the local cache directly instead. Note: Expires is a server-side time, for example: Expires: Fri, 30 Oct 1998 14:19:41

(3) If-Modified-Since / Last-Modified: these two come as a pair and belong to the negotiated cache. The browser sends If-Modified-Since, and the server holds Last-Modified. If the two match, the server's resource has not changed, so the server returns only the headers, not the resource entity, telling the browser to use its local cache. Last-Modified is the file's last modification time, and its precision is limited to 1s.

Caching control in HTTP1.1:

(1) Cache-Control: the cache-control header, whose values include no-cache and max-age.

(2) Max-Age: configured by the server to control the strong cache; within the specified period the browser does not issue a request and uses the local cache directly. Max-age is a value of Cache-Control, for example Cache-Control: max-age=60 (the unit is seconds). It is a relative time, computed locally by the browser.

(3) If-None-Match / ETag: If-None-Match is the browser's header, and ETag is the server's. When a request is sent, if If-None-Match and ETag match, the content has not changed and the browser is told to use its local cache. Unlike Last-Modified, ETag is more accurate: it is like a fingerprint, generated from the file's inode, size, and modification time, so any change to the file changes it immediately.

Cache-control versus Expires?

1. Both are mandatory caches.

2. Expires uses server-side time; because of time zones and because the browser's local time can be modified, HTTP1.1 no longer recommends Expires. The max-age of Cache-Control, by contrast, is a relative time computed locally by the browser.

3. When Cache-Control and Expires are used together, Cache-Control has the higher priority.

ETag compared with Last-Modified?

1. Both are negotiation caches.

2. Last-Modified is the server file's last modification time; its defect is that precision is limited to 1s. ETag is a fingerprint mechanism: as soon as the file changes, the ETag changes immediately, with no precision limit.

3. ETag has a higher priority than Last-Modified.
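The matching rules and the ETag-over-Last-Modified priority can be condensed into one small decision function; this is a sketch with invented names, not code from a real server:

```javascript
// Decide whether a conditional request can be answered with 304 Not Modified.
// If-None-Match (ETag) takes priority over If-Modified-Since (Last-Modified).
function conditionalStatus(reqHeaders, resource) {
  const inm = reqHeaders["if-none-match"];
  if (inm !== undefined) {
    return inm === resource.etag ? 304 : 200;
  }
  const ims = reqHeaders["if-modified-since"];
  if (ims !== undefined) {
    // Last-Modified only has 1s precision, hence the >= comparison.
    return Date.parse(ims) >= resource.lastModified ? 304 : 200;
  }
  return 200; // no validators sent, so return the full response
}

const resource = {
  etag: '"abc123"',
  lastModified: Date.parse("Fri, 30 Oct 1998 14:19:41 GMT"),
};

console.log(conditionalStatus({ "if-none-match": '"abc123"' }, resource)); // 304
console.log(conditionalStatus({ "if-none-match": '"other"' }, resource));  // 200
console.log(conditionalStatus(
  { "if-modified-since": "Fri, 30 Oct 1998 14:19:41 GMT" }, resource));    // 304
```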

The overall relationship between the cache headers is shown below

The page parsing flow

That concludes the HTTP interaction; the browser now has the HTML and begins parsing and rendering it.

1. Brief description of the process

Once the browser kernel gets its hands on the content, rendering is roughly divided into the following steps:

(1) Parse HTML and build DOM tree; At the same time, the CSS is parsed and the CSS rule tree is generated.

(2) Combine DOM tree and CSS rule tree to generate Render tree.

(3) Lay out the render tree (layout/reflow), computing each element's size and position;

(4) Paint the render tree (paint), producing the page's pixel information;

(5) The browser sends each layer's information to the GPU, which composites the layers and displays them on screen.

The diagram below:

2. HTML parsing and DOM construction

This step is usually summarized as: the browser parses the HTML and builds the DOM tree. Let's expand on it a little.

The process from parsing HTML to building dom is summarized as follows:

Bytes → characters → tokens → nodes → DOM

<html>
  <head>
    <meta name="viewport" content="width=device-width,initial-scale=1">
    <link href="style.css" rel="stylesheet">
    <title>Critical Path</title>
  </head>
  <body>
    <p>Hello <span>web performance</span> students!</p>
    <div><img src="awesome-photo.jpg"></div>
  </body>
</html>

The browser does the following:

To list some of the key processes:

1. Conversion: the browser converts the HTML content it receives (bytes) into individual characters according to its encoding.
2. Tokenizing: the browser converts these characters into distinct tokens according to the HTML specification; each token has its own unique meaning and rule set.
3. Lexing: the tokens are converted into objects (nodes) that define their properties and rules.
4. DOM construction: because HTML tags define relationships between different tags, the nodes are linked into a tree structure.
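A drastically simplified version of the tokenize-then-build-tree pipeline (real HTML parsing is far more forgiving and complex; this toy handles only well-formed, attribute-free tags):

```javascript
// Toy HTML parser: characters -> open/close/text tokens -> node tree.
function parseHTML(html) {
  const tokens = html.match(/<\/?[a-z]+>|[^<]+/g) || [];
  const root = { tag: "#root", children: [] };
  const stack = [root];
  for (const token of tokens) {
    if (token.startsWith("</")) {
      stack.pop(); // close tag: climb back to the parent
    } else if (token.startsWith("<")) {
      const node = { tag: token.slice(1, -1), children: [] };
      stack[stack.length - 1].children.push(node);
      stack.push(node); // open tag: descend into the new node
    } else {
      stack[stack.length - 1].children.push({ text: token });
    }
  }
  return root;
}

const dom = parseHTML("<body><p>Hello <span>web</span></p></body>");
const body = dom.children[0];
console.log(body.tag);                         // "body"
console.log(body.children[0].tag);             // "p"
console.log(body.children[0].children[1].tag); // "span"
```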

For example, the parent of the body object is the html object, and the parent of the paragraph p object is the body object. The final DOM tree:

3, CSS parsing, build CSS rule tree

The CSS rule tree generation is similar

Bytes → characters → tokens → nodes → CSSOM

For example: style.css

body { font-size: 16px }
p { font-weight: bold }
span { color: red }
p span { display: none }
img { float: right }

The final CSSOM tree looks like this:
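The same pipeline can be sketched as a toy CSS parser. Illustrative only: a real engine also tokenizes selectors, resolves the cascade and specificity, and builds an actual tree rather than a flat rule list.

```javascript
// Toy CSSOM sketch: parse "selector { prop: value; ... }" rules into objects.
function parseCSS(css) {
  const rules = [];
  const re = /([^{]+)\{([^}]*)\}/g;     // selector block + declaration block
  let m;
  while ((m = re.exec(css)) !== null) {
    const declarations = {};
    for (const decl of m[2].split(';')) {
      const [prop, value] = decl.split(':').map(s => s && s.trim());
      if (prop && value) declarations[prop] = value;
    }
    rules.push({ selector: m[1].trim(), declarations });
  }
  return rules;
}

const cssom = parseCSS('body { font-size: 16px } p span { display: none }');
```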

4. Build the render tree

Once the DOM tree and CSSOM are in place, it’s time to build the render tree. In general, render trees correspond to DOM trees, but not strictly one-to-one.

This is because some invisible DOM nodes are not inserted into the render tree, such as non-visual tags like head, or elements with display: none.
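That filtering step can be sketched with plain objects standing in for DOM nodes; the getStyle callback here is a stand-in for real CSSOM matching, and the tag list is illustrative.

```javascript
// Sketch: build a render tree by walking the DOM and skipping nodes that
// produce no boxes (head/meta/script etc., and anything with display: none).
const NO_BOX_TAGS = new Set(['head', 'meta', 'link', 'script', 'title', 'style']);

function buildRenderTree(domNode, getStyle) {
  if (NO_BOX_TAGS.has(domNode.tag)) return null;     // non-visual tag: no box
  const style = getStyle(domNode);
  if (style.display === 'none') return null;         // excluded entirely
  const renderNode = { tag: domNode.tag, style, children: [] };
  for (const child of domNode.children || []) {
    const r = buildRenderTree(child, getStyle);
    if (r) renderNode.children.push(r);
  }
  return renderNode;
}

// Tiny fake DOM: body > (p > span, div), where the span is display: none.
const dom = {
  tag: 'body',
  children: [
    { tag: 'p', children: [{ tag: 'span', children: [] }] },
    { tag: 'div', children: [] },
  ],
};
const styles = { span: { display: 'none' } };
const tree = buildRenderTree(dom, n => styles[n.tag] || { display: 'block' });
```

The resulting tree keeps the p and div boxes but drops the hidden span, so the render tree is not one-to-one with the DOM tree.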

5. Rendering

With the Render tree in place, it’s time to start rendering. The basic flow is as follows:

The four important steps in the diagram are:

(1) Calculate the CSS styles;

(2) Build the render tree;

(3) Layout: mainly computing coordinates and sizes, along with line wrapping and properties such as position, overflow, and z-index;

(4) Paint: draw out the final image.

The remaining lines and arrows in the diagram represent dynamic changes to the DOM or CSS through JS, which trigger another layout (reflow) or repaint.

Layout and repaint are different:

(1) Layout, also known as reflow, means that an element's content, structure, position, or size has changed, so the styles and the render tree need to be recalculated.

(2) Repaint means the change only affects the element's appearance (for example, its background color, border color, or text color), so the browser simply applies the new style and redraws the element.

A reflow costs more than a repaint, and reflowing one node often forces its children and siblings to reflow as well, so optimization generally means avoiding reflows as much as possible.

6. What causes reflow

1. Page rendering initialization

2.DOM structure changes, such as deleting a node

3. Render tree changes, such as reducing padding

4. Window resize

5. The most subtle one: reading certain properties. Many browsers optimize reflows by queueing changes and flushing them in a batch, but when certain layout properties are read, the browser must flush the queue and reflow immediately in order to return a correct value. This defeats the optimization. These properties include:

(1) offset(Top/Left/Width/Height)
(2) scroll(Top/Left/Width/Height)
(3) client(Top/Left/Width/Height)
(4) width, height
(5) Calling getComputedStyle(), or currentStyle in IE

A reflow is always accompanied by a repaint, but a repaint can occur on its own.

Optimization scheme:

(1) Don't change styles one property at a time; make a single combined change, or define the styles in a class and switch the class once.

(2) Don't operate on the DOM inside a loop; create a documentFragment or an off-document div, perform all DOM operations on it, and then attach it to the document in one step.

(3) Avoid reading layout properties such as offsetTop repeatedly; cache them in variables where necessary.

(4) Take complex elements out of the document flow with absolute or fixed positioning; otherwise the cost of their reflows is very high.
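Tips (1) and (3) can be sketched as follows. The el object here is a mock that merely counts forced layout flushes; in a real browser the same pattern applies to an actual DOM element, where each layout read can force a reflow and each batched write costs only one.

```javascript
// Mock element: counts how many times layout is forced. Illustrative only.
let reflows = 0;
const el = {
  style: {
    set cssText(_) { reflows++; },               // one combined write -> one reflow
  },
  get offsetWidth() { reflows++; return 100; },  // a layout read can force a reflow
};

// Good pattern: read once and cache, then write everything in one assignment.
const width = el.offsetWidth;                     // 1 forced reflow
const positions = [0, 1, 2].map(i => i * width);  // reuse the cached value
el.style.cssText = 'padding: 2px; border: 1px solid red; color: blue;'; // 1 reflow
```

Reading el.offsetWidth inside the loop instead would have forced a flush on every iteration; here the whole sequence costs two.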

Note: changing the font size also triggers a reflow.

Here’s another example:

var s = document.body.style;
s.padding = "2px";              // reflow + repaint
s.border = "1px solid red";     // another reflow + repaint
s.color = "blue";               // repaint only
s.backgroundColor = "#ccc";     // another repaint
s.fontSize = "14px";            // another reflow + repaint
// Adding a node: another reflow + repaint
document.body.appendChild(document.createTextNode('abc!'));

7. Simple layers and composite layers

The rendering flow above stops at painting, but painting is actually not that simple. It involves the concepts of simple layers and composite layers.

A brief introduction:

(1) By default there is only one composite layer, and all DOM nodes live inside it.

(2) With hardware acceleration enabled, a node can be promoted to its own composite layer.

(3) Composite layers are painted independently of one another and are handled directly by the GPU.

(4) Within a simple layer, even with absolute positioning, a change does not trigger a page-wide reflow, but painting is still affected because everything sits on the same layer, so animation performance remains poor. Composite layers, by contrast, are independent, which is why hardware acceleration is generally recommended for animations.
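As a sketch of point (2), a common way to promote an animated element onto its own composite layer is via transform or will-change (the class name below is illustrative):

```css
/* Promote the element to its own composite layer so its animation is
   composited on the GPU instead of repainting the shared layer. */
.animated {
  will-change: transform;     /* hint that transform is about to change */
  transform: translateZ(0);   /* classic fallback trick to force a layer */
  transition: transform 0.3s ease;
}
```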

More reference: segmentfault.com/a/119000001…

8. Chrome debugging

In Chrome’s developer tools, you can see the detailed rendering process in Performance:

9. Downloading external resources

The sections above covered HTML parsing and the rendering process. In reality, however, while parsing the HTML the browser encounters linked resources that require separate handling.

For simplicity, the static resources encountered are grouped into the following categories (not exhaustive):

(1) CSS style resources

(2) JS script resources

(3) IMG image resources

(1) General handling of external links

When an external link is encountered, a separate download thread is opened to download the resource.

(2) Encountering CSS style resources

CSS resource processing features:

(1) CSS downloads asynchronously and does not block the browser from building the DOM tree;

(2) However, it does block rendering: when building the render tree, the browser waits until the CSS has been downloaded and parsed (a browser optimization that avoids repeatedly rebuilding the render tree as CSS rules change);

(3) The exception is a stylesheet whose media query does not match the current environment: it is still downloaded, but it does not block rendering.
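For example (the file names are placeholders), the media attribute determines whether a stylesheet blocks rendering:

```html
<link rel="stylesheet" href="style.css">            <!-- blocks rendering -->
<link rel="stylesheet" href="print.css" media="print">
<!-- still downloaded, but does not block rendering on screen -->
<link rel="stylesheet" href="wide.css" media="(min-width: 1024px)">
<!-- render-blocking only when the query matches -->
```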

(3) Encountering JS script resources

The processing of JS script resources has several characteristics:

(1) They block the browser's parsing: when an external script is found, HTML parsing resumes only after the script has been downloaded and executed.

(2) Browser optimizations: modern browsers generally continue downloading other resources in parallel while a script blocks (subject to a concurrency limit). But even though scripts can be downloaded in parallel, parsing remains blocked: the HTML after the script is parsed only once the script has executed, so the parallel download is merely an optimization.

(3) defer and async: a normal script blocks parsing, but adding the defer or async attribute makes the script load asynchronously and execute after it becomes available.

Note that defer and async are different: defer defers execution, while async executes as soon as the script arrives.

To put it simply:

(1) async: the script is downloaded asynchronously and executed as soon as the download finishes. The execution order of multiple async scripts is not guaranteed, though they all run before the window's load event.

(2) defer: execution is deferred; to the browser it looks as though the script were placed at the end of the body. The spec says deferred scripts should execute before the DOMContentLoaded event, but in practice the optimization varies between browsers and execution may come slightly later.
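The three loading modes side by side (the script names are placeholders):

```html
<script src="blocking.js"></script>
<!-- fetched and executed immediately; HTML parsing pauses -->
<script src="deferred.js" defer></script>
<!-- fetched in parallel; executed in document order after parsing,
     before DOMContentLoaded -->
<script src="async.js" async></script>
<!-- fetched in parallel; executed as soon as it arrives, order not guaranteed -->
```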

(4) Encountering img image resources

When images and similar resources are encountered, they are downloaded asynchronously without blocking parsing; once the download completes, the image simply replaces the placeholder at its src position.

10. load and DOMContentLoaded

Contrast:

(1) The DOMContentLoaded event fires as soon as the DOM has finished loading, without waiting for stylesheets or images (and scripts loaded with async may not have finished yet).

(2) When the load event fires, the DOM, stylesheets, scripts, and images on the page have all finished loading.
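A minimal page fragment to observe the difference; on a normal page load the console shows "DOM ready" before "fully loaded":

```html
<script>
  document.addEventListener('DOMContentLoaded', function () {
    // DOM tree is complete; stylesheets and images may still be loading
    console.log('DOM ready');
  });
  window.addEventListener('load', function () {
    // everything, including images and async scripts, has finished loading
    console.log('fully loaded');
  });
</script>
```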


The End
