1. What happens from entering the URL to presenting the page?
(DNS domain name resolution, TCP links, HTTP requests, HTTP responses, rendering)
(1) Parse the URL: extract the protocol, then determine whether the host part is a domain name or an IP address;
(2) If the host is a domain name, send a DNS query to the DNS server to obtain the host's IP address;
(3) Using the IP address obtained from DNS, establish a TCP connection and send an HTTP/HTTPS request to the destination address; the port is filled in automatically from the protocol (HTTP 80, HTTPS 443). If the strong cache is hit, no request is sent at all.
(4) Wait for the server's response;
(5) The browser engine parses the response (HTML) to build the render tree, then renders it and paints it to the display; at this point the user can see the page being rendered.
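Step (1) can be sketched with the WHATWG URL API, which is available in both browsers and Node.js; the sample address below is made up for illustration:

```javascript
// Parsing a URL: the protocol and host are separated for us, and the
// default port is implied by the scheme (80 for http, 443 for https).
const url = new URL("https://example.com/path?q=1");

console.log(url.protocol); // "https:"
console.log(url.hostname); // "example.com"

// url.port is "" when the scheme's default port is in use:
const port = url.port || (url.protocol === "https:" ? "443" : "80");
console.log(port); // "443"
```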
2. DNS resolution process
The browser first checks its own DNS cache, then the operating system's cache and the hosts file. If no record is found, the query is sent to the local DNS server (usually the ISP's), which resolves the name recursively: it asks a root server, which refers it to the TLD server (e.g. for .com), which refers it to the domain's authoritative server, which finally returns the IP address. Each level caches the result according to its TTL.
3. What is CDN service?
A CDN is a content delivery network. By caching the source site's resources on multiple servers located in different regions and on different carriers, it lets users access resources from the nearest node. In other words, a user's request is not sent directly to the source site but to a CDN server, which routes the request to the nearest server that holds the resource. This speeds up the website and, at the same time, reduces the load on the source server.
4. CDN access process
(1) After the user enters the domain name, the operating system queries LocalDNS for the domain name's IP address;
(2) LocalDNS queries the root DNS for the domain's authoritative server (assuming the LocalDNS cache has expired);
(3) The root DNS replies to LocalDNS with the domain's authoritative DNS record;
(4) After obtaining the authoritative DNS record, LocalDNS queries the authoritative DNS for the domain name's IP address;
(5) The authoritative DNS looks up the domain record (generally a CNAME) and replies to LocalDNS;
(6) After obtaining the domain record, LocalDNS queries the intelligent scheduling DNS for the domain name's IP address;
(7) The intelligent scheduling DNS responds to LocalDNS with the IP address of the most suitable CDN node, according to certain algorithms and policies (such as static topology and capacity);
(8) LocalDNS returns the obtained IP address to the client;
(9) With the IP address in hand, the user accesses the site server;
(10) The CDN node server responds to the request and returns the content to the client (the cache server saves the data locally for future use, and at the same time returns the obtained data to the client, completing the data service process).
5. TCP three-way handshake
(1) The browser sends a SYN packet saying it wants to establish a connection.
(2) The server replies with a SYN+ACK packet saying ok.
(3) After receiving the SYN+ACK packet, the browser sends an ACK packet saying ok, and the TCP connection is established.
6. Browser caching
Browser caches are divided into the strong cache and the negotiation cache.
Strong cache: Cache-Control / Expires in the response headers (Cache-Control takes priority over Expires).
Negotiation cache: Last-Modified / If-Modified-Since and ETag / If-None-Match; the first field of each pair is sent in the response header, and the second in the subsequent request header.
In HTTP/1.1, Cache-Control is the most important rule for controlling web caching. Its main values are:
- public: the content may be cached by anyone (both client and proxy);
- private: the content may be cached only by the client (the default value of Cache-Control);
- no-cache: the client caches the content, but whether the cache is used must be verified through the negotiation cache;
- no-store: nothing is cached at all, i.e. neither the strong cache nor the negotiation cache is used;
- max-age=xxx (xxx is a number): the cached content expires after xxx seconds.
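The directives above can be pulled out of a header with a few lines of string handling. This is a small sketch, not a full RFC 9111 parser, and the function name parseCacheControl is ours:

```javascript
// Split a Cache-Control header into a map of directives, so the rules
// above can be checked in code. Value-less directives map to true.
function parseCacheControl(header) {
  const directives = {};
  for (const part of header.split(",")) {
    const [name, value] = part.trim().split("=");
    if (!name) continue;
    directives[name.toLowerCase()] = value !== undefined ? value : true;
  }
  return directives;
}

const cc = parseCacheControl("public, max-age=600");
console.log(cc["max-age"]); // "600"
console.log(cc["public"]); // true
```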
Last-Modified / If-Modified-Since principle (the ETag / If-None-Match principle is similar):
When the browser requests a resource for the first time, the server's response includes a Last-Modified header giving the time the resource was last modified, e.g. Last-Modified: Thu, 31 Dec 2037 23:59:59 GMT.
When the browser requests the resource again, the request header carries If-Modified-Since, whose value is the Last-Modified value cached earlier. On receiving If-Modified-Since, the server compares it with the resource's last modification time to decide whether the cache is hit.
If the cache is hit, the server returns 304 without the resource body and without a new Last-Modified header.
Note: (1) Expires is a product of HTTP/1.0 and exists mainly for compatibility; if Cache-Control is present, Expires is simply ignored.
(2) An ETag is a fingerprint of the response content: its value changes whenever the content changes. It is generated by a hash or similar algorithm, which carries some computational cost. Last-Modified compares the modification time of the response content and is only accurate to the second. The two complement each other; having one does not rule out the other. When both are sent to the server, the server may validate with ETag, with Last-Modified, or even with both, as its cache policy requires.
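The server-side decision described above can be sketched as a pure function. The name conditionalStatus and the sample dates are ours; a real server would read the file's mtime and the incoming request headers:

```javascript
// Decide between 304 (cache hit) and 200 (full response) from the
// If-Modified-Since request header and the resource's Last-Modified time.
function conditionalStatus(ifModifiedSince, lastModified) {
  // HTTP dates are second-precision; compare them as Date values.
  if (ifModifiedSince &&
      new Date(ifModifiedSince).getTime() >= new Date(lastModified).getTime()) {
    return 304; // cache hit: no body, no new Last-Modified
  }
  return 200; // cache miss: send the full resource with Last-Modified
}

const lastModified = "Thu, 31 Dec 2020 23:59:59 GMT";
console.log(conditionalStatus("Thu, 31 Dec 2020 23:59:59 GMT", lastModified)); // 304
console.log(conditionalStatus("Wed, 30 Dec 2020 23:59:59 GMT", lastModified)); // 200
console.log(conditionalStatus(undefined, lastModified)); // 200 (no conditional header)
```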
7. How browsers render
(1) First, the received document is parsed and a DOM tree is built according to the document definition. The DOM tree is composed of DOM elements and attribute nodes.
(2) Then the CSS is parsed to generate the CSSOM rule tree.
(3) A render tree is built from the DOM tree and the CSSOM rule tree. A node of the render tree is called a render object: a rectangle with properties such as color and size. Render objects correspond to DOM elements, but not one to one: invisible DOM elements are not inserted into the render tree, and some DOM elements correspond to several render objects, typically elements with structure too complex to be described by a single rectangle.
(4) When render objects are created and added to the tree, they have no position or size yet, so after generating the render tree the browser lays it out (a step also known as reflow, or backflow). All the browser has to do at this stage is figure out the exact position and size of each node on the page.
(5) The layout phase is followed by the paint phase, which traverses the render tree and calls each render object's paint method to display its content on the screen, drawing with the UI backend components. Notably, this process is gradual: for a better user experience, the rendering engine renders content to the screen as early as possible rather than waiting until all the HTML has been parsed before building and laying out the render tree. It parses part of the content and displays it, while the rest may still be downloading over the network.
8. How to deal with JS files in the rendering process? (Browser parsing process)
The loading, parsing, and execution of JavaScript blocks the parsing of the document, which means that when the HTML parser encounters JavaScript while building the DOM, it suspends the parsing of the document, handing control to the JavaScript engine. When the JavaScript engine is finished, the browser picks up where it left off and continues parsing the document. That said, if you want the first screen to render as quickly as possible, you should not load JS files on the first screen, which is why it is recommended to place the script tag at the bottom of the body tag. At the moment, of course, it’s not necessary to put the script tag at the bottom, as you can add defer or async properties to the script tag.
9. What do async and defer do? What’s the difference? (Browser parsing process)
(1) If the script has neither defer nor async, the browser loads and executes it immediately, before reading and rendering the document elements that follow it.
(2) The defer attribute defers execution of the imported JavaScript: loading the script does not stop HTML parsing; the two proceed in parallel. The script executes after the entire document has been parsed, before the DOMContentLoaded event fires. Multiple deferred scripts execute in order.
(3) The async attribute loads the imported JavaScript asynchronously. The difference from defer is that an async script executes as soon as it has finished loading, so its execution can still block document parsing, although its loading does not. The execution order of multiple async scripts is not guaranteed.
What is the difference between the DOMContentLoaded event and the load event?
DOMContentLoaded fires when the initial HTML document has been completely loaded and parsed, without waiting for stylesheets, images, and subframes to load. The load event fires only when all resources have finished loading.
10. How does CSS block document parsing? (Browser parsing process)
CSS itself does not block document parsing, but it does block DOM rendering, and CSS loading blocks the execution of subsequent JS statements.
In theory, since stylesheets do not change the DOM tree, there is no reason to stop parsing the document to wait for them. However, JavaScript executed during document parsing may query style information; if the style has not yet been loaded and parsed, the script gets wrong values, which obviously causes problems.
So if the browser has not finished downloading and building the CSSOM when a script needs to run, it defers both the script's execution and document parsing until the CSSOM is ready. That is, in this case the browser downloads and builds the CSSOM, then executes the JavaScript, and finally continues parsing the document.
11. What is redraw and reflow? (Browser drawing process)
Redraw (repaint): when elements in the render tree need attribute updates that only affect appearance and style (such as background color) without affecting layout, the operation is called a redraw.
Backflow (reflow): when part (or all) of the render tree must be rebuilt because an element's size, layout, visibility, or the like has changed, the layout-affecting operation is called a backflow.
Common backflow causing properties and methods:
Any operation that changes the geometry of an element (its position and size) triggers backflow.
(1) Add or remove visible DOM elements;
(2) Element size changes — margins, padding, borders, width, and height
(3) Content changes, such as user input text in the input box
(4) The browser window size changes — when the resize event occurs
(5) Reading layout properties such as offsetWidth, offsetHeight, scrollWidth, and clientWidth;
(6) Setting the style attribute;
(7) Changing the font size.
A backflow always causes a redraw, but a redraw does not necessarily cause a backflow. Backflow is much more expensive, and changing a single child node may trigger a series of backflows in its parent node.
12. How to reduce backflow? (Browser drawing process)
(1) Replace top with transform;
(2) Do not read a node's layout properties repeatedly inside a loop; cache them in variables outside the loop;
(3) Do not use table layout; a small change may cause the entire table to be re-laid out;
(4) Modify the DOM offline, for example by manipulating it in memory with a documentFragment;
(5) Do not change a DOM node's style property by property; instead, define a CSS class in advance and change the node's className.
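Tip (5) amounts to batching style writes so the browser lays out once instead of several times. A minimal sketch, in which the pure helper buildCssText is our own name (not a DOM API); the DOM usage is shown in comments since it only runs in a browser:

```javascript
// Compose the full style string first, then apply it in one assignment,
// so the browser performs a single reflow instead of one per property.
function buildCssText(styles) {
  return Object.entries(styles)
    .map(([prop, value]) => `${prop}: ${value}`)
    .join("; ");
}

const cssText = buildCssText({ width: "100px", height: "50px", top: "10px" });
console.log(cssText); // "width: 100px; height: 50px; top: 10px"

// In the browser this would be applied in one write:
//   element.style.cssText = cssText;   // one reflow
// rather than:
//   element.style.width = "100px";     // reflow
//   element.style.height = "50px";     // reflow
//   element.style.top = "10px";        // reflow
```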
13. Why is DOM manipulation slow? (Browser drawing process)
Some DOM manipulation or property access can cause backflow and redrawing of the page, resulting in a performance cost.
14. Please describe the difference between cookies, sessionStorage and localStorage
The common storage technologies on the browser side are cookie, localStorage, and sessionStorage.
(1) Cookies were originally a way for the server to record user state. A cookie is set by the server, stored on the client, and then sent with every subsequent same-origin request. A cookie can store at most about 4 KB of data; its lifetime is specified by the Expires attribute, and it can only be shared by same-origin pages.
(2) sessionStorage is a browser local storage method provided by HTML5, modeled on the server-side session: it represents data stored for one session. It can hold about 5 MB or more, expires when the current window (tab) closes, and can only be accessed and shared by same-origin pages within the same window.
(3) localStorage is also a browser local storage method provided by HTML5. It can generally hold 5 MB or more. Unlike sessionStorage, it does not expire unless removed manually, and it can only be accessed and shared by same-origin pages.
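To make the shared Web Storage interface concrete, here is a minimal in-memory sketch of its semantics. The class name MiniStorage is ours; browsers provide the real localStorage and sessionStorage objects:

```javascript
// An in-memory model of the Storage interface. Note that both keys and
// values are always stored as strings, and a missing key reads as null.
class MiniStorage {
  constructor() { this.data = new Map(); }
  setItem(key, value) { this.data.set(String(key), String(value)); }
  getItem(key) {
    return this.data.has(String(key)) ? this.data.get(String(key)) : null;
  }
  removeItem(key) { this.data.delete(String(key)); }
  clear() { this.data.clear(); }
  get length() { return this.data.size; }
}

const storage = new MiniStorage();
storage.setItem("user", "alice");
console.log(storage.getItem("user")); // "alice"
storage.removeItem("user");
console.log(storage.getItem("user")); // null

// The main behavioral difference between the two real objects is lifetime:
// sessionStorage is cleared when the tab closes; localStorage persists.
```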
15. What is the difference between cookies and sessions? How does the server clear cookies?
The main difference is that a session lives on the server while a cookie lives on the client, so a session is more secure than a cookie; cookies also may not always work (the browser may block them). The server can clear a client-side cookie by setting its value to empty and giving it an Expires time in the past.
A cookie may contain key information directly, while a session is typically referenced by an encrypted, opaque string.
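Clearing a cookie, as described above, means resending it with an empty value and an Expires date in the past. A sketch, where the helper name buildClearCookie and the cookie name sessionId are ours:

```javascript
// Build a Set-Cookie header that deletes the named cookie on the client:
// empty value, expiry in the past (the Unix epoch is the usual choice).
function buildClearCookie(name) {
  return `${name}=; Expires=Thu, 01 Jan 1970 00:00:00 GMT; Path=/`;
}

// In a Node.js handler this would be sent as a response header, e.g.:
//   res.setHeader("Set-Cookie", buildClearCookie("sessionId"));
console.log(buildClearCookie("sessionId"));
// "sessionId=; Expires=Thu, 01 Jan 1970 00:00:00 GMT; Path=/"
```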
16. What is the difference between Canvas and SVG?
Canvas is a way of drawing 2D graphics with JavaScript. A canvas is rendered pixel by pixel, so when we scale a canvas the result shows jagged edges or distortion.
SVG is an XML-based language for describing 2D graphics. Because SVG is XML, every element in the SVG DOM is available, and we can attach JavaScript event listeners to any element. SVG also retains the drawing instructions for each shape, so SVG graphics do not distort when scaled.
Canvas Application Scenario
Canvas provides more primitive functions, suitable for pixel processing, dynamic rendering and large amount of data drawing
SVG Application Scenarios
SVG is more sophisticated and suitable for static image display, high-fidelity document viewing and printing applications
17. The advantages and disadvantages of base64 encoding are briefly introduced.
Base64 encoding is a way of handling images: a specific algorithm encodes the image into a long string that can be used on the page in place of the image's URL.
The advantages of using Base64 are:
(1) It saves one HTTP request for the image.
The disadvantages of using Base64 are:
(1) By the nature of Base64 encoding, the encoded data is about 1/3 larger than the original file. Embedding it in HTML/CSS not only increases file size, slowing file loading, but also increases the time the browser needs to parse and render the HTML or CSS file.
(2) Base64 data cannot be cached directly; only the file containing it (such as HTML or CSS) can be cached, which is far less effective than caching the image itself on a separate domain.
(3) Compatibility: browsers before IE8 do not support it. In general, Base64 is only suitable for inlining a site's small icons.
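The 1/3 overhead in disadvantage (1) follows from Base64 emitting 4 output characters for every 3 input bytes; a quick check with Node.js Buffers (in the browser, btoa plays the same role), using made-up sample data:

```javascript
// 3000 bytes of sample data stand in for an image's binary content.
const original = Buffer.from("a".repeat(3000));
const encoded = original.toString("base64");

console.log(original.length); // 3000
console.log(encoded.length); // 4000  (exactly 4/3 of the original)

// Inlined in CSS/HTML the encoded string would appear as a data URL, e.g.:
//   url("data:image/png;base64,iVBORw0KGgo...")
```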
18. What SEO does the front end need to pay attention to?
(1) Reasonable title, description and keywords: their search weight decreases in that order. The title should simply emphasize the key points; important keywords should appear no more than twice, placed toward the front, and different pages should have different titles.
The description should summarize the page content at an appropriate length without over-stacking keywords; different pages should have different descriptions.
Keywords should list the page's key words.
(2) semantic HTML code, in line with W3C specifications: semantic code makes it easy for search engines to understand web pages.
(3) Put important content early in the HTML: search engines crawl HTML from top to bottom, and some limit how much they capture, so make sure important content is captured;
(4) Do not render important content with JS: crawlers do not execute JS to obtain content;
(5) Use iframes sparingly: search engines do not crawl iframe content;
(6) Non-decorative images must have alt text;
(7) Improve site speed: site speed is an important ranking factor for search engines.
19. What are the front-end caches?
HTTP cache, webStorage, cookies, IndexedDB, Application Cache, the PWA manifest, and Service Workers.
IndexedDB is large-capacity client-side storage.
IndexedDB is a local database provided by the browser that can be created and manipulated with JS. In terms of database type, IndexedDB is a non-relational database.
Application Cache
HTML5 introduced application caching, which means a web application can be cached and then accessed even without an Internet connection.
Application caching gives applications three advantages:
- Offline browsing – Users can use applications while offline
- Fast – Cached resources load faster
- Reduced server loading – The browser only downloads updated/changed resources from the server
The manifest attribute should be included for every page in your Web application that you want to cache.
The manifest file is a simple text file that lists resources cached by the browser for offline access.
Use:

```html
<!DOCTYPE HTML>
<html manifest="demo.appcache">
...
</html>
```

The manifest file is a simple text file that tells the browser what is cached (and what is not).
The manifest file can be divided into three sections:

- CACHE MANIFEST — files listed under this heading are cached after the first download;
- NETWORK — files listed under this heading always require a connection to the server and are never cached;
- FALLBACK — entries under this heading specify the fallback page to use when a page is inaccessible (such as a 404 page).
A complete example:

```text
CACHE MANIFEST
# 2012-02-21 v1.0.0
/theme.css
/logo.gif
/main.js

NETWORK:
login.php

FALLBACK:
/html/ /offline.html
```
PWA manifest (progressive web application)
The manifest is a JSON file that provides a series of descriptions such as theme, background color, icons, and so on, to make a web application behave more like a native application.
```html
<link rel="manifest" href="/manifest.json">
```

```json
{
  "name": "hackerWeb",                 // full application name
  "short_name": "hackerWeb",           // abbreviated name
  "lang": "en-US",                     // language
  "start_url": "",                     // URL opened when the app is launched
  "scope": "/myapp",                   // range of URLs the application can access
  "display": "standalone",             // preferred display mode: fullscreen, standalone (like a native app),
                                       // minimal-ui (standalone with a minimal browser bar), or browser;
                                       // preference order: fullscreen > standalone > minimal-ui > browser
  "background_color": "#fff",          // default theme color
  "description": "test for app",       // description
  "dir": "ltr",                        // text direction of name/short_name/description:
                                       // ltr (left to right), rtl (right to left), auto (browser decides)
  "orientation": "any",                // default orientation of the web application
  "icons": [                           // array of icons
    {
      "src": "icon/lower.png",         // image path
      "sizes": "48x48"
    }
  ],
  "prefer_related_applications": true, // prefer the related native applications below
  "related_applications": [            // native applications that can be installed or accessed
    {
      "platform": "web",               // platform on which to find the application
      "url": "",                       // URL where the application can be found
      "id": ""                         // application id on the specified platform
    }
  ]
}
```

(Reference: https://juejin.cn/post/6933231057823072264)
Service workers
A problem has plagued web users for years: lost network connections. Although many technologies (such as offline pages) have tried to solve it, there was no good mechanism for controlling resource caching and customizing network requests, so Service Workers were born. A service worker lets your application access locally cached resources first, so it can still provide basic functionality while offline, before more data arrives over the network. It acts as a middleman between the server and the browser: if a service worker is registered on a site, it can intercept all of the site's requests and decide how to handle them (with script logic).
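The "cache first, then network" interception a service worker typically performs can be sketched as a plain function. The real Cache API is browser-only, so below a Map stands in for the cache and fetchFn for the network; the name cacheFirst and the sample URLs are ours, for illustration only:

```javascript
// Cache-first strategy: answer from the local cache when possible (which
// also works offline), otherwise fetch from the network and cache the result.
async function cacheFirst(cache, url, fetchFn) {
  if (cache.has(url)) {
    return cache.get(url); // serve from the local cache, even offline
  }
  const response = await fetchFn(url); // fall back to the network
  cache.set(url, response); // save for next time
  return response;
}

// In a real service worker this logic lives in a fetch listener, e.g.:
//   self.addEventListener("fetch", (event) => {
//     event.respondWith(
//       caches.match(event.request).then((hit) => hit || fetch(event.request))
//     );
//   });

const cache = new Map([["/app.js", "cached app.js"]]);
const fakeFetch = async (url) => `network response for ${url}`;

cacheFirst(cache, "/app.js", fakeFetch).then(console.log); // "cached app.js"
cacheFirst(cache, "/new.css", fakeFetch).then(console.log); // "network response for /new.css"
```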