I started looking for a new job at the end of last year, and the search has only just wrapped up. I interviewed with many companies on and off, and looking back on that period, I was put through the wringer by interviewers and battered by written test questions.

In this series, I plan to summarize the interview questions I encountered while job hunting (I took notes after every interview) along with interesting questions I came across while reviewing. The New Year is peak job-hopping season, so this may help some of you.

First, a word about difficulty: most of these are basic questions. My experience is that whether you are interviewing for a senior or a junior position, the fundamentals will be asked, often in some depth, so a solid foundation is very important.

I will divide it into several articles according to the type:

Interview summary: JavaScript questions (a ten-thousand-word read) (completed)

Interview summary: Node.js questions (completed)

Interview summary: browser-related questions (completed)

Interview summary: CSS questions (completed)

Interview summary: Vue framework and engineering questions (completed)

Interview summary: non-technical questions (completed)

I will finish the remaining summaries as soon as I can ~

This article is a summary of browser-related topics; feel free to bookmark it before reading.

Let’s look at the table of contents

Let’s talk about caching

Of all performance optimizations, caching is the most important and the most directly effective; after all, everyone is busy these days, and nobody wants to sit staring at a loading spinner.

Caching is divided into strong caching and negotiation caching; see the flow chart.

The fields that control caching live in the request and response headers.

Strong cache

With a strong cache, the client reads the resource directly from its local cache as long as it is within the cache lifetime.

A strong cache hit returns status code 200.

Expires

Expires specifies an absolute expiration time, for example Expires: Sat, 09 Jun 2018 08:13:56 GMT. It is an older HTTP/1.0 field, but it can still be added for compatibility.

Cache-Control

Cache-Control specifies directives that control the caching mechanism. Multiple directives are separated by commas. The common ones are listed below; the full list can be found via the MDN link at the end of this section.

max-age: the validity period of the strong cache, in seconds, e.g. max-age=30672000

no-cache: skips the strong cache and uses negotiation caching, confirming with the server whether the cached response has changed.

no-store: forbids the browser from caching the data at all; every request goes to the server and downloads the complete resource. This can be used to turn caching off.

public: the response may be cached by any cache (the requesting client, proxy servers, and so on), even if it would not normally be cacheable (for example, has no max-age directive or Expires header).

private: the response may only be cached for a single user and must not be stored in a shared cache (a proxy server cannot cache it); the user's private cache may store it.
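As a quick illustration, here is a minimal sketch of setting strong-cache headers on a Koa-style server, matching the ctx style used in the JSONP example later in this article; the route, file, and max-age value are made up for the example.

/* server: strong cache sketch (Koa-style; route and values are hypothetical) */
const Koa = require('koa');
const app = new Koa();

app.use(async (ctx) => {
  if (ctx.path === '/static/logo.png') {
    // Strong cache: valid for 30 days; within that time the browser reads its local copy and never asks the server
    ctx.set('Cache-Control', 'max-age=2592000');
    // Expires kept around for HTTP/1.0 compatibility
    ctx.set('Expires', new Date(Date.now() + 2592000 * 1000).toUTCString());
    ctx.type = 'image/png';
    ctx.body = Buffer.from([]); // the file contents would go here
  }
});

app.listen(3500);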

Negotiation cache

With a negotiation cache the key word is negotiation: before using the local copy, the client checks with the server. If the server says the local resource is still the latest, the client uses it directly; otherwise the server returns the latest resource and the client updates its local copy.

Status code:

  • If the local resource is up to date, the server returns 304.
  • If the comparison shows the latest resource must be fetched from the server, it is a normal 200.

Last-Modified / If-Modified-Since

This pair compares the resource's last modification time, accurate to the second.

Last-Modified: the last update time of the resource on the server, e.g. Tue, 14 Jan 2020 09:18:29 GMT

If-Modified-Since: sent by the client to initiate the negotiation; it carries the locally recorded update time for the server to compare.

This pair is a product of HTTP/1.0. Because the precision is one second, the copies can fall out of sync if a file is updated more than once within the same second.

ETag / If-None-Match

To solve that problem, HTTP/1.1 added this pair of headers.

ETag: a unique string identifier the server generates from the resource's content

If-None-Match: sent by the client to initiate the negotiation; it carries the locally recorded ETag for the server to compare.

If both Last-Modified and ETag exist, ETag takes higher precedence.
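And a minimal sketch of ETag-based negotiation caching, added to the same Koa-style app as the strong-cache sketch above; the resource content and choice of hash are made up for the example.

/* server: negotiation cache sketch (Koa-style; content and hash choice are hypothetical) */
const crypto = require('crypto');

app.use(async (ctx) => {
  const content = 'hello world';                                       // pretend this is the resource
  const etag = crypto.createHash('md5').update(content).digest('hex'); // unique id derived from the content

  ctx.set('ETag', etag);
  ctx.set('Cache-Control', 'no-cache'); // always negotiate before using the local copy

  if (ctx.get('If-None-Match') === etag) {
    ctx.status = 304;                   // local copy is still the latest; send no body
    return;
  }
  ctx.body = content;                   // otherwise return the latest resource
});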

Browser cache location

Two sources can be seen:

Memory cache: Reads data from memory

Disk Cache: Reads data from hard drives

Memory is of course faster to read than a hard drive, so why use the disk at all?

Because browsers have limited memory, they have a mechanism for storing files in different locations depending on file size and how often they are used. The details vary by browser vendor, but this is imperceptible to the user.

Reference documentation

Developer.mozilla.org/zh-CN/docs/…

Do you understand PWA?

PWA (Progressive Web Apps) uses modern Web APIs and traditional progressive enhancement strategies to create cross-platform Web applications. (From MDN)

Take a look at the core technologies of PWA to see what its advantages are

App Shell

The App Shell architecture is a way to build Progressive Web Apps that load onto the user's screen reliably and instantly, much like native applications.

This model contains the minimal resource files required for the interface; if they are cached offline, repeat visits are responsive, pages render quickly, and the network is only used to fetch data.

Put another way, an App Shell behaves like a native app: it can launch locally without a network.

Service Worker

This is the core of PWA. We said above that caching makes pages load fast, but only when there is a network. How do you load a page without one?

A Service Worker's persistent offline cache makes that possible.

Service workers have the following functions and features:

  • It runs in an independent worker thread, separate from the page's process, with its own worker context

  • Once installed, it exists forever unless it is manually unregistered

  • It can be woken up when needed and goes to sleep automatically when not needed

  • It can programmatically intercept and proxy requests and responses and cache files; cached files can be read by the page (even while the network is offline)

  • Developers control what content is available offline

  • It can push messages to the client

  • It cannot manipulate the DOM directly

  • It only works over HTTPS

  • Its APIs are asynchronous, mostly implemented with Promises

JS is single-threaded, but a Service Worker runs in its own thread, so it does not block JS execution; and it can programmatically intercept and proxy requests and responses and customize the file-caching policy.

These features mean that developers have enough control over the cache to make it as elegant and efficient as possible

The core question, then, is how to design the caching strategy. Common strategies (a minimal example follows this list):

  1. Cache first: query the cache first; if a cached copy exists, return it directly
  2. Network first: request the server first, and fall back to the cache only when the server request fails
  3. Cache with background update: read from the cache first, and at the same time request the server to update the resource
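Here is a minimal sketch of the cache-first strategy in a service worker; the cache name and file list are made up for the example.

/* sw.js - cache-first sketch (cache name and file list are hypothetical) */
const CACHE_NAME = 'app-shell-v1';

self.addEventListener('install', (event) => {
  // Pre-cache the app shell files during install
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) => cache.addAll(['/', '/index.css', '/index.js']))
  );
});

self.addEventListener('fetch', (event) => {
  // Cache first: return the cached response if it exists, otherwise fall back to the network
  event.respondWith(
    caches.match(event.request).then((cached) => cached || fetch(event.request))
  );
});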

I recommend taking a look at the caching strategies in the open source Workbox package; they are much richer.

The code itself is not complex, mainly the lifecycle, communication with the page's JS thread, and API calls, so I will not paste it all here.

Reference Documents:

Lavas.baidu.com/pwa/offline…

Developer.mozilla.org/zh-CN/docs/…

From URL input to page render

  1. Domain name resolution (DNS) to find the server address
  2. Establish a TCP connection; if the site uses HTTPS, a TLS handshake is layered on top
  3. Handle special response codes such as 301 and 302 redirects
  4. Parse the document
  5. Build the DOM tree and the CSSOM
  6. Generate the render tree: traverse each visible node starting from the root of the DOM tree, find the matching rules in the CSSOM for each one, and combine each visible node with its style to produce the render tree
  7. Layout (reflow): from the render tree, compute the geometry information of each node
  8. Paint: turn the geometry from the render tree and layout into absolute pixels
  9. Display on the page; this step also touches on paint layers and GPU-related topics
  10. Load and parse JS scripts

This is the general flow; interviewers will pick out individual points to follow up on.

Repaint and reflow (relayout)

First look at this diagram, the HTML document and the CSS rendering process

The page is laid out as a flow, left to right and top to bottom. If one node's geometry changes, it affects the layout of other nodes, and node geometry has to be collected again before drawing; that process is reflow.

Repainting refers to manipulating the appearance of elements, such as colors, backgrounds, shadows, etc.

So a reflow always triggers a repaint.

Scenarios that trigger reflow

Reading layout information or modifying geometry properties, for example:

  • Adding or removing visible DOM elements
  • An element's position changes
  • An element's size changes (including margins, padding, border size, height, width, etc.)
  • Content changes, such as text changing or an image being replaced by one of a different size
  • The first render of the page (unavoidable)
  • The browser window size changes (reflow computes element position and size based on the viewport size)
  • Reading layout information, because the browser must reflow to compute the up-to-date values
// Properties and methods that read layout information (reading them forces a reflow)
// - offsetTop / offsetLeft / offsetWidth / offsetHeight   (offset relative to the offset parent)
// - scrollTop / scrollLeft / scrollWidth / scrollHeight
// - clientTop / clientLeft / clientWidth / clientHeight   (clientTop/clientLeft are the border thickness)
// - getComputedStyle()
// - getBoundingClientRect()

Reflow optimization

Regenerating part of the tree, or even the whole tree, is very costly, so avoid triggering reflow too often (see the sketch after this list).

  • Modern browsers already optimize for us by queueing multiple reflow operations and executing them in batches; the exception is reading layout information, because to return real-time values the browser must flush the queue and reflow immediately.
  • In code, avoid many consecutive changes; merge them so they trigger a single reflow
  • For a large batch of DOM changes, take the element out of the document flow first (for example with absolute positioning or display: none), make the changes, then put it back
  • Control the trigger frequency with throttling and debouncing
  • CSS3 hardware acceleration: transform, opacity, and filters. When enabled, a new rendering layer is created
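To illustrate merging changes and avoiding interleaved layout reads and writes, here is a small sketch; the element id and the specific changes are made up for the example.

// Hypothetical list element
const list = document.getElementById('list');

// Read layout once, then batch the writes into one frame instead of interleaving reads and writes
const width = list.offsetWidth;            // single read, single forced reflow
requestAnimationFrame(() => {
  for (const item of list.children) {
    item.style.width = width + 'px';       // writes are queued and flushed together
  }
});

// For many insertions, build the nodes off-document and attach them in one go
const fragment = document.createDocumentFragment();
for (let i = 0; i < 100; i++) {
  const li = document.createElement('li');
  li.textContent = 'item ' + i;
  fragment.appendChild(li);
}
list.appendChild(fragment);                // one insertion, one reflow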

How to enable GPU acceleration

When enabled, the DOM element is promoted to a separate rendering layer, and its changes no longer affect the layout of the document flow.

  • transform: translateZ(0)
  • opacity
  • filters
  • will-change

Talk about what you know about HTTP

HTTP (Hypertext Transfer Protocol) is an application-layer protocol built on top of TCP.

It is a one-way, short-lived connection model. The versions in use are HTTP/1.0, HTTP/1.1, and HTTP/2.

HTTP/1.0: each request from the client requires a separate connection, which is released automatically once the request has been handled. HTTP/1.1: multiple requests can be handled over a single persistent connection. HTTP/2: requests can be multiplexed and overlap, so the next request does not have to wait for the previous one to finish.

Interviewers ask this question mostly because what they really want to probe is your understanding of TCP.

TCP

TCP is a transport-layer protocol. Its hallmarks are the three-way handshake and the four-way wave.

The purpose of the three-way handshake is to prevent stale connection-request segments from suddenly reaching the server and causing errors; a reliable connection must be established before data is sent.

The three-way handshake establishes the connection process:

  1. The client sends a segment with SYN=1 and a randomly generated seq to the server; from SYN=1 the server knows the client wants to establish a connection (client: I want to connect to you).
  2. The server replies with ack = (client seq + 1), SYN=1, ACK=1, and its own randomly generated seq (server: OK, you can connect).
  3. On receiving this, the client checks that the ack is correct (its own seq + 1) and that ACK is 1. If so, it sends ack = (server seq + 1) with ACK=1; when the server verifies these values, the connection is established (client: OK, here I come).

The four-way wave when disconnecting:

  1. The client sends a request to the server asking to actively disconnect and enters a wait state; it no longer sends data to the server but can still receive data (client: I am disconnecting).
  2. On receiving this, the server tells the client it knows; the server enters a wait state and no longer accepts data, but may continue to send data (server: OK, I know, but wait a moment).
  3. After receiving the server's acknowledgement, the client waits for the next step (client: OK, I'll wait).
  4. When the server has finished sending its remaining data, it tells the client it can disconnect; the server no longer receives or reads data (server: you can disconnect now).
  5. The client receives this, tells the server it got it, and releases the connection (client: OK, I'm disconnecting).
  6. When the server receives that, it releases the connection as well.

UDP

The other transport-layer protocol, UDP (User Datagram Protocol), is connectionless.

UDP simply carries datagrams: it does not establish a reliable connection and does not guarantee data reliability. Because it has few control fields, its headers are simple and its packets small, it is faster and better suited to real-time use, for example in video conferencing and multimedia streaming.

Introduce HTTPS

HTTP messages are transmitted in plain text, so their contents can be read by capturing packets. That is a security problem: traffic is easy to hijack and tamper with.

To solve this problem, we have TLS, HTTPS = HTTP + TLS

TLS: a transport-layer security protocol that provides confidentiality and data integrity between two communicating applications. It consists of two layers: the TLS Record protocol and the TLS Handshake protocol.

TLS uses asymmetric cryptography to authenticate the communicating parties and then exchanges a symmetric key to use as the session key. HTTPS therefore works in two phases:

  1. Asymmetric encryption and decryption are used to check that the peer is legitimate; if so, the session key is generated. (This step is the core.)
  2. Before messages are sent, they are symmetrically encrypted with the session key.

The TLS handshake

The steps are as follows:

  1. The client asks the server to establish an SSL connection, and the server sends the client a random number randomC and a certificate issued by a CA.
  2. The client validates the certificate, generates a random number randomS, encrypts randomS with the public key, generates a signature from randomS, and sends both to the server.
  3. On receiving them, the server decrypts the ciphertext with its private key, generates a signature from the decrypted value, and compares it with the signature sent by the client. It then generates a random number randomP, encrypts it with its private key along with a hash of the random number, and sends them to the client.
  4. The client decrypts with the public key and verifies the hash; then both ends use randomC, randomS, and randomP to derive the session key with an agreed algorithm. Subsequent messages are transmitted with symmetric encryption under the session key.

For front-end developers this is fairly theoretical, so I suggest drawing a flow chart from these steps; it helps with understanding and memorization.

The CA certificate

Step 1 above mentioned the CA certificate. If there were no certificate to verify the connection, the public key could be intercepted in transit and swapped out: a middleman replaces the server's public key with its own and returns that to the client, a classic bait-and-switch, and the encryption no longer protects anything. That is a man-in-the-middle attack.

So a verification mechanism is needed to guarantee that the public key really came from the server and has not been tampered with, and that is where the CA certificate comes in.

A CA certificate is issued by a certificate authority. It carries the key information: the signature algorithm, the signature hash algorithm, the issuer, the validity period, the public key, and the fingerprint. The two algorithms are the ones used in the symmetric and asymmetric phases; the public key is the server's public key, which the company uploads to the CA when applying for the certificate. The crucial part is the fingerprint, a signature the CA generates by encrypting with its private key.

So by verifying that the certificate is valid, you know whether the public key has been tampered with. How is validity verified?

Naturally, through the certificate's fingerprint.

Browsers and personal computers ship with the top-level CA certificates and public keys built in. After the browser receives the certificate, it decrypts the fingerprint with the built-in public key to obtain the signature, generates a signature itself by the same rules, and compares the two. If they match, the public key in the certificate can be trusted.

So does this completely eliminate man-in-the-middle attacks?

After all, the top-level CA certificates are built in. Or is there still a way around this? Remember that we can use Fiddler to capture HTTPS traffic; isn't Fiddler sitting in the middle?

The reason it works is that we install Fiddler's certificate on the phone before capturing packets. Once the client trusts that third-party certificate, it can decrypt the messages Fiddler sends.

So as long as you do not casually trust third-party certificates, man-in-the-middle attacks are basically not a concern.

What triggers an OPTIONS request

OPTIONS is typically used as a preflight request before a cross-origin request, to check whether the server will accept the actual request.

Cross-origin requests fall into two types: simple requests and preflighted requests. A simple request meets all of the following conditions:

  • The HTTP method is one of GET, POST, HEAD
  • The Content-Type is one of text/plain, multipart/form-data, application/x-www-form-urlencoded
  • The request headers are limited to the following:
Accept, Accept-Language, Content-Language, Content-Type, DPR, Downlink, Save-Data, Viewport-Width, Width

Any request that is not simple triggers a preflight request first.

Common examples:

  • A POST request whose Content-Type is application/xml or text/xml
  • Requests that set custom headers, such as X-JSON, X-MengXianhui, etc.

The preflight request carries these headers:

Access-Control-Request-Method: the HTTP method the actual request will use

Access-Control-Request-Headers: the custom header fields the actual request will carry

The preflight response returns headers such as Access-Control-Allow-Origin (the origins the server accepts), Access-Control-Allow-Methods (the methods it allows), and Access-Control-Allow-Headers (the header fields it allows).

The browser then decides, based on the preflight response, whether to proceed with the actual cross-origin request.

Note: to send cookies with cross-origin requests, the server must set resp.setHeader("Access-Control-Allow-Credentials", "true") and the client must set withCredentials to true.
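As a rough sketch of the server side (again Koa-style, added to the same app as the caching sketches; the allowed origin and headers are made up for the example):

/* server: CORS sketch (Koa-style; allowed origin and headers are hypothetical) */
app.use(async (ctx, next) => {
  ctx.set('Access-Control-Allow-Origin', 'http://127.0.0.1:8080');   // origin allowed to call us
  ctx.set('Access-Control-Allow-Methods', 'GET,POST,PUT,DELETE,OPTIONS');
  ctx.set('Access-Control-Allow-Headers', 'Content-Type,X-JSON');
  ctx.set('Access-Control-Allow-Credentials', 'true');               // allow cookies; client must also set withCredentials

  if (ctx.method === 'OPTIONS') {
    ctx.status = 204; // answer the preflight without running business logic
    return;
  }
  await next();
});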

Resources: cloud.tencent.com/developer/n…

What are the common HTTP headers

Look at the picture directly

Different Content-Type values for submitted data

The Content-Type field tells the server how the body of the request message is encoded.

Content-Type: application/json: a JSON string

Content-Type: application/x-www-form-urlencoded: key=value pairs joined with &; jQuery's default

Content-Type: multipart/form-data: commonly used for file uploads

About front-end security precautions

The two main types are XSS and CSRF

XSS

Cross-site scripting: the attacker injects a piece of executable code into the page, for example via links or input boxes. XSS is divided into persistent and non-persistent: persistent malicious code is stored in the database and causes ongoing attacks; non-persistent code only affects the page currently in use.

The way to prevent it is to escape any content obtained from user input before rendering it on the page.
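For illustration, a minimal escaping helper might look like the sketch below; the function name and usage are my own, and real projects usually rely on a vetted library or the framework's built-in escaping.

// Hypothetical helper: escape the characters HTML treats specially before inserting user content
function escapeHtml(str) {
  return String(str)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

// Usage: never feed raw user input to innerHTML
const comment = '<img src=x onerror="alert(1)">';
document.getElementById('comments').innerHTML = escapeHtml(comment); // rendered as text, not executed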

CSRF

Cross-site request forgery: the attacker builds a phishing site and exploits the target site's trust in the browser to trick the user into initiating requests that perform malicious operations.

After the user logs in, the site trusts the browser, but it has no way of knowing whether a request was voluntarily initiated by the user; once that trust is established, any request the browser sends is trusted.

So, while the user is logged in, the phishing site initiates a cross-origin request, via a cross-origin tag or a form, which carries the user's authentication cookies and thereby forges the user's identity to carry out the attack.

Prevention methods:

  1. The server validates the Referer, although in some browsers the Referer can be modified
  2. A random token: a token is generated for each page visited, requests from the page carry the token, and the server verifies it. Note that this token must not be stored in a cookie

XSS and CSRF are covered in great detail around the web, but the core principles are relatively simple.

Do you understand CORB?

The first time I heard this I was a bit thrown, thinking they had said CORS. I went back and looked it up afterwards.

CORB is an algorithm that decides, before cross-site resource data reaches the page, whether to block it from entering the current site's process, reducing the risk of exposing sensitive data. It is part of the Site Isolation mechanism and protects site resources against cross-origin tags.

When the MIME type of the data returned by a cross-origin request does not match the type the cross-origin tag expects, the browser applies CORB to keep the data from leaking. The protected data types are HTML, XML, and JSON.

MIME type

MIME is an Internet standard that extends the e-mail standard to support more message types. Common MIME types include text/html, text/plain, image/png, and application/javascript; they identify the document type of the returned message and take the form type/subtype. In an HTTP response header such as Content-Type: application/javascript; charset=UTF-8, the MIME type is part of the Content-Type value.

Cross-Origin Read Blocking (CORB)

Cross-domain solutions

There are several mainstream ones

Use tags that are allowed to load cross-origin resources (image, script) to initiate GET cross-origin requests

  1. Image tag implementation
var img = new Image();
img.onload = function () {
  // the request was sent successfully (the response itself cannot be read)
};
img.onerror = function () {
  // the request failed
};
img.src = options.url; // e.g. a tracking URL with parameters in the query string
  2. The script tag implements what is commonly called JSONP. A script executes whatever string is returned, so the front end passes a parameter naming a global function; the server takes that name, builds a string that calls the function with the data as its argument, and returns it as application/javascript, which triggers the preset callback.
/* client */
function cb(res) {
    console.log('into');
    console.log(res);
}
let scr = document.createElement('script');
scr.src = 'http://127.0.0.1:3500/xx?callback=cb';
document.getElementsByTagName('head')[0].appendChild(scr);

/* server */
let data = { name: 'xiaoli' };
// ctx.query = { callback: 'cb' }
var str = ctx.query.callback + '(' + JSON.stringify(data) + ')';
// str = 'cb({"name":"xiaoli"})'
ctx.body = str;

The reverse proxy

Nginx is commonly used as a reverse proxy, but the detailed configuration will not be discussed

CORS

Everything here is configured on the server side; the three main response headers are:

Access-Control-Allow-Origin: the origins from which the server accepts requests

Access-Control-Allow-Methods: the HTTP methods the server allows for the actual request

Access-Control-Allow-Headers: the custom header fields the actual request is allowed to carry

A BFF middle layer does the forwarding

If the project has a BFF layer, the request can be relayed through it; whether this is an option depends on the project's architecture.

Is there any difference between setting the cache in an HTML meta tag and setting it in the HTTP headers?

The cache policy of the HTML meta setting is valid for the current document and is used to define the page cache.

Set meta tags to clear the page cache

History routing and hash routing

Hash routing

Hash routing was the scheme for single-page routing before HTML5. Changing the hash does not trigger a page reload, and the server cannot see the hash value. The front end handles hash changes by listening for the hashchange event.

window.addEventListener('hashchange', function () {
    // Listen for hash changes; also triggered by the browser's forward and back buttons
});

History routing

History routing, part of the HTML5 specification, provides operations on the contents of the history stack. Common APIs include:

window.history.pushState(state, title, url)
// let currentState = history.state;  // get the current state
// state: data to save; it can be read from event.state when the popstate event fires
// title: usually null
// url: the new history URL. Its origin must match the current URL's origin, otherwise an error is thrown. It may be an absolute or a relative path.
// If the current URL is https://www.baidu.com/a/, executing history.pushState(null, null, 'qq/') gives https://www.baidu.com/a/qq/,
// while executing history.pushState(null, null, '/qq/') gives https://www.baidu.com/qq/

window.history.replaceState(state, title, url)
// Basically the same as pushState, except that it modifies the current history entry while pushState creates a new one

window.addEventListener('popstate', function () {
    // Listen for browser forward/back navigation. The pushState and replaceState methods do not trigger this event
});

Several ways to bind js events

  • Bind directly on the DOM element: <div class="an" onclick="aa()">aaaa</div>
  • Bind in JS: document.getElementById("demo").onclick = function () {}
  • Add an event listener: document.addEventListener('name', () => {})

What is event delegation

Event triggering in the browser has three stages:

  1. The event capture phase starts from the outer layer and propagates inward
  2. The event reaches the target node, the target phase
  3. Return from the target stage to the outer layer, the bubbling stage

Event delegation is also called event proxying. Because of the event bubbling mechanism, events on child nodes can be caught by a listener on the parent node.

So, in appropriate scenarios, child-node events are handled by a listener on the parent node; this works for bubbling events such as click and mouse events (see the sketch after the list below).

Advantages of event delegation:

  1. You can reduce the number of listeners and reduce memory footprint
  2. Event listening can be implemented for dynamically added child nodes
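A minimal sketch of event delegation; the element id is made up for the example.

// One listener on the parent handles clicks from every <li>, including ones added later
const list = document.getElementById('list'); // hypothetical <ul id="list">
list.addEventListener('click', function (event) {
  // event.target is the element actually clicked; event.currentTarget is the <ul> the listener is bound to
  const item = event.target.closest('li');
  if (item && list.contains(item)) {
    console.log('clicked item:', item.textContent);
  }
});

// Dynamically added children are covered without adding new listeners
const li = document.createElement('li');
li.textContent = 'new item';
list.appendChild(li);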

Understanding event capture and event bubbling:

Event capture: events propagate from the outside in. Setting the last parameter of addEventListener to true listens during the capture phase; the default is false. Capture reflects how the computer processes the input.

Bubbling: events spread from the inside out; bubbling matches how people intuitively reason about events.

target versus currentTarget

target: the element in the target phase of the event flow, i.e. the element that was actually clicked.

currentTarget: the element currently handling the event during the capture and bubbling phases (the one the listener is bound to); it equals target only in the target phase.

CSS loading problems

According to the page rendering process:

  1. CSS loading does not block DOM tree parsing;
  2. CSS loading blocks DOM tree rendering;
  3. CSS loading blocks the execution of subsequent JS statements

Introduce prefetch/preload and async/defer

prefetch / preload

Both tell the browser to load files (images, videos, JS, CSS, etc.) ahead of time, but the execution is different.

Prefetch: It uses browser idle time to download or prefetch documents that the user may access in the near future.

Preload: Indicates which resources are needed immediately after the page is loaded. The browser preloads before the main rendering mechanism kicks in. This mechanism allows resources to be loaded and available earlier and is less likely to block the initial rendering of the page, thus improving performance.

These are the resource types preload's as attribute can declare:
audio: an audio file.
document: an HTML document to be embedded in a <frame> or <iframe>.
embed: a resource to be embedded inside an <embed> element.
fetch: a resource fetched via fetch or XHR, such as an ArrayBuffer or JSON file.
font: a font file.
image: an image file.
object: a resource to be embedded inside an <object> element.
script: a JavaScript file.
style: a style sheet.
track: a WebVTT file.
worker: a JavaScript web worker or shared worker.
video: a video file.

Differences between async and defer for JS

Both control how JS scripts are loaded.

async: the script is downloaded in parallel with rendering of subsequent document elements; as soon as it finishes downloading, HTML parsing is paused and the script is executed immediately.

defer: the script is downloaded in parallel with rendering of subsequent document elements, but its execution waits until HTML parsing is complete.

References:

Developer.mozilla.org/zh-CN/docs/…

Developer.mozilla.org/zh-CN/docs/…

Introduce the viewport

<meta name="viewport" content="width=500, initial-scale=1">

Only two properties are specified here, width and initial-scale. In fact, the viewport offers more control; it can take all of the following properties:

  • Width: indicates the page width. The value can be a specific number or device-width, indicating that it is the same as the device width.
  • Height: indicates the page height. The value can be a specific number or device-height, indicating that it is the same as the device height.
  • Initial-scale: indicates the initial scaling scale.
  • Minimum-scale: indicates the minimum scale.
  • Maximum-scale: indicates the maximum scaling ratio.
  • User-scalable: Whether to allow users to scale.

Why is there a 300ms delay on mobile? How do you deal with it?

Historically, a double tap on mobile could zoom or scroll, so a 300ms delay was added to distinguish a single tap from a double tap.

Solution:

  • CSS touch-action: the default is auto; setting it to none removes the 300ms delay for the target element. Drawback: it is a newer property and may have browser compatibility issues
  • Use touchstart and touchend to simulate click events; the drawback is click-through
  • FastClick's principle: when touchend is detected, it immediately fires a simulated click via a custom DOM event and blocks the real click event the browser fires 300ms later
  • On all versions of Chrome for Android, if the viewport meta sets user-scalable=no, the browser fires the click event immediately

Web Worker

(Fill in later)

Browser Performance Monitoring

Most performance-related data can be captured with the performance.timing API

  • navigationStart: Timestamp of the previous page (not necessarily the same domain as the current page) unload in the same browser context, equal to the fetchStart value if there was no previous page unload
  • unloadEventStart: Timestamp of previous page unload (same domain as the current page), 0 if there is no previous page unload or if the previous page has a different domain from the current page
  • redirectStart: The time when the first HTTP redirect occurs. The value is 0 only when there is a redirect within the same domain name
  • redirectEnd: The time when the last HTTP redirect is complete. The value is 0 only when there is a redirect within the same domain name

The current page starts to load

  • fetchStart: The time when the browser is ready to grab the document using an HTTP request, before checking the local cache

DNS TCP during network transmission

  • domainLookupStart: Indicates the start time of DNS domain name query. If local cache (no DNS query) or persistent connection is used, the value is the same as the fetchStart value
  • domainLookupEnd: Time when DNS domain name query is complete. If local cache is used (no DNS query is performed) or persistent connection is used, the value is the same as the value of fetchStart
  • connectStart: The time when the HTTP (TCP) connection is started. If the connection is persistent, this value is equal to the fetchStart value. If an error occurs at the transport layer and the connection is re-established, this is the time when the newly established connection is started
  • secureConnectionStart: Time when the HTTPS connection starts. If the connection is not secure, the value is 0
  • connectEnd:HTTP (TCP) time to complete connection establishment (complete handshake), equal to fetchStart value if it is a persistent connection. If an error occurs at the transport layer and the connection is re-established, this displays the time when the newly established connection is completed

Document reading stage

  • requestStart: The time when the HTTP request started reading the real document (the connection was completed), including reading from the local cache, and the time when the new connection was established if the connection error was reconnected
  • responseStart: The time when the HTTP response starts being received (the first byte is fetched), including reading from the local cache
  • responseEnd: The time when the HTTP response is fully received (fetched to the last byte), including reading from the local cache

Parsing document phase

  • domLoading: the time when parsing of the DOM tree begins; document.readyState becomes "loading" and the readystatechange event fires
  • domInteractive: the time when parsing of the DOM tree completes; document.readyState becomes "interactive" and the readystatechange event fires
  • domContentLoadedEventStart: after DOM parsing completes, the time when resources in the page start loading; it marks when the DOMContentLoaded event fires
  • domContentLoadedEventEnd: after DOM parsing completes, the time when resources in the page have loaded (e.g. JS scripts have loaded and executed); the end of the document's DOMContentLoaded event, which is also jQuery's domReady time
  • domComplete: the DOM tree is parsed and resources are ready; document.readyState becomes "complete" and the readystatechange event fires
  • loadEventStart: the load event is sent to the document, i.e. the time the load callback starts executing; 0 if no load event is bound
  • loadEventEnd: the time when the load event's callback finishes; 0 if no load event is bound

Query the time range of each phase

DNS query time = domainLookupEnd – domainLookupStart

TCP connection duration = connectEnd – connectStart

Request time = responseEnd – responseStart

Dom tree parsing time = domComplete – domInteractive

White screen time = domLoading – fetchStart

Domready time = domContentLoadedEventEnd – fetchStart

Onload time = loadEventEnd – fetchStart
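A small sketch that reads these values and computes the durations above; note that performance.timing is deprecated in favor of the newer PerformanceNavigationTiming entry, but it still works in current browsers.

// Read the timing data after the page has fully loaded
window.addEventListener('load', function () {
  // Wait one tick so loadEventEnd has been filled in
  setTimeout(function () {
    const t = performance.timing;
    const metrics = {
      dns: t.domainLookupEnd - t.domainLookupStart,
      tcp: t.connectEnd - t.connectStart,
      request: t.responseEnd - t.responseStart,
      domParse: t.domComplete - t.domInteractive,
      whiteScreen: t.domLoading - t.fetchStart,
      domReady: t.domContentLoadedEventEnd - t.fetchStart,
      onload: t.loadEventEnd - t.fetchStart,
    };
    console.table(metrics);
    // In a real monitoring setup these values would be reported to a collection endpoint
  }, 0);
});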

Name a few of HTML5's new features

This one is a little boring, but it’s here

  • New semantic tags
  • New APIs and local storage
  • CSS3 borders, backgrounds, and animations

What events are available for continuous Chinese input

When typing continuous Chinese in an input box, the following two events can be listened for (I only just learned they exist):

compositionstart: fired before a chunk of text is composed (similar to keydown, but it fires only before composing several visible characters, which may require a series of keystrokes, speech recognition, or selecting a candidate from an input method).

compositionend: fired when the composition of a chunk of text is completed or cancelled (for input that requires a series of keystrokes and other input, such as speech recognition or mobile word suggestions).
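A common use is suppressing search or filter logic while the input method is still composing. A minimal sketch follows; the element id and the doSearch function are made up for the example.

const input = document.getElementById('search'); // hypothetical input box
let composing = false;

input.addEventListener('compositionstart', function () {
  composing = true;              // IME composition began; intermediate pinyin should not trigger a search
});

input.addEventListener('compositionend', function (event) {
  composing = false;             // composition finished; the committed text is now in the input
  doSearch(event.target.value);
});

input.addEventListener('input', function (event) {
  if (composing) return;         // skip intermediate composition updates
  doSearch(event.target.value);
});

function doSearch(keyword) {
  console.log('searching for:', keyword); // hypothetical search handler
}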