1. Browser cache strategy (negotiated cache vs. strong cache)

  • Cache policy:

    • 1. When accessing a resource, the browser first checks whether a copy exists in the local cache. If there is no cached copy, the browser requests the resource from the server (and caches it once it is received).
    • 2. Check for a strong cache hit: if the resource exists in the local cache, the browser inspects the cached response headers (the Expires and Cache-Control fields) to decide whether the strong cache is still valid. On a hit, the browser uses the local copy directly and sends no request to the server.
    • 3. Check for a negotiated cache hit: if the strong cache misses, the browser sends a request to the server carrying the negotiation fields (If-Modified-Since and If-None-Match) built from the cached response headers. The server uses these fields to decide whether the cached copy is still valid. On a hit it returns 304, telling the browser the resource has not been updated, and the browser uses its local copy.
    • 4. If the negotiated cache also misses, the server returns the latest resource and the browser loads it from the response.
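The four steps above can be sketched as a single decision function. This is a minimal illustration: the header names are real, but the `storedAtMs` bookkeeping field and the return labels are made up for the sketch, and a real browser tracks far more state.

```javascript
// Sketch of the browser cache decision flow described above.
// `cached` holds the headers saved with the local copy, or null if
// nothing is cached; `nowMs` is the current time in milliseconds.
function decideCache(cached, nowMs) {
  if (!cached) return 'fetch';                      // step 1: nothing cached
  const cc = cached['cache-control'] || '';
  const maxAge = cc.match(/max-age=(\d+)/);
  if (maxAge) {                                     // step 2: strong cache via max-age
    const ageSec = (nowMs - cached.storedAtMs) / 1000;
    if (ageSec < Number(maxAge[1])) return 'use-local-cache';
  } else if (cached.expires && nowMs < Date.parse(cached.expires)) {
    return 'use-local-cache';                       // step 2: strong cache via Expires
  }
  // step 3: strong cache missed, revalidate if we have negotiation fields
  if (cached.etag || cached['last-modified']) return 'revalidate';
  return 'fetch';                                   // step 4: plain fetch
}
```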
  • Strong cache: the browser checks the cached response headers (the Expires and Cache-Control fields) and decides, based on them, whether to use the cached copy or send a request to the server. The key to the strong cache is therefore validating the Expires and Cache-Control fields.

    • Expires: when the server returns a resource, the browser caches it, and the response header carries an Expires field set by the server. The field is an absolute timestamp, meaning the resource is valid until that time. If the browser needs the resource again before the Expires time, it can serve it straight from the local cache. So when the browser checks for a strong cache hit, it is really checking whether the current system time is earlier than Expires.
      • Expires drawback: Expires depends on the local clock, which is a problem when the local time is inaccurate.
    • Cache-Control: like Expires, a cache control field set by the server in the response header when it returns the resource, which the browser stores with the cached copy. The field can carry several directives, each representing a different cache behavior.
      • Cache-Control: max-age=x: for example, max-age=1000 means the cached copy expires 1000 seconds after the response. If the browser wants the resource again within 1000 seconds, it serves it from the local cache without sending a request. After 1000 seconds the strong cache misses and the negotiated cache takes over.
      • Cache-Control: no-store: caching is disabled entirely; the latest data must be fetched every time, so the browser always sends a request to the server.
      • Cache-Control: no-cache: the response may be cached but must be revalidated; the strong cache is skipped and the negotiated-cache flow runs.
      • Cache-Control: public: the response may be cached by anyone along the way (end users as well as intermediate proxies such as CDNs).
      • Cache-Control: private: the response may only be cached by the end user's browser.
    • Expires and Cache-Control can be used at the same time; Cache-Control takes precedence.
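A small helper can make the directive behavior above concrete. The function name and the returned strings are illustrative; only a few directives are handled.

```javascript
// Classify the cache behavior implied by a Cache-Control header value.
function cacheBehavior(cacheControl) {
  const dirs = cacheControl.toLowerCase().split(',').map(d => d.trim());
  if (dirs.includes('no-store')) return 'no caching at all';       // always hit the server
  if (dirs.includes('no-cache')) return 'cache, but always revalidate';
  const maxAge = dirs.find(d => d.startsWith('max-age='));
  if (maxAge) return `fresh for ${maxAge.split('=')[1]}s`;         // strong cache window
  return 'no explicit freshness';
}
```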
  • Negotiated cache: as the cache policy shows, the negotiated cache runs after the strong cache misses. The browser sends a request to the server carrying the negotiation fields (If-Modified-Since and If-None-Match) built from the cached response headers. Based on these fields, the server decides whether the browser may keep using its local copy: if so, it returns 304; otherwise it returns the latest resource.

    • If-Modified-Since / Last-Modified: when the server returns a resource, the browser caches it, and the response header carries a Last-Modified field set by the server, whose value is the time the resource was last modified on the server. During negotiated caching, the browser adds an If-Modified-Since field to the request header, set to the cached Last-Modified value. When the server receives the revalidation request, it reads If-Modified-Since and compares it with the resource's current last modification time. For example, if If-Modified-Since is 2021-1-1 but the resource was last modified on 2021-1-2, the resource has changed, the negotiated cache misses, and the server returns the latest resource. If the times match, the server returns a 304 status code, and on receiving it the browser uses the cached copy directly.
      • If-Modified-Since drawbacks:
        • Last-Modified is accurate only to the second, so if the resource changes more than once within one second, Last-Modified does not change and the update is missed.
        • If the resource changed and then changed back, its content is effectively unchanged and the cache could be used, but Last-Modified has changed, so the server returns the resource anyway.
  • If-None-Match / ETag: when the server returns a resource, the browser caches it, and the response header carries an ETag field set by the server. The value is derived from the file content, so identical content always yields the same ETag. During negotiated caching, the browser adds an If-None-Match field to the request header, set to the cached ETag. When the server receives the revalidation request, it reads If-None-Match and compares it with the current file's ETag: if they match it returns 304; otherwise it returns the resource directly.

  • If-Modified-Since and If-None-Match can be used at the same time. The server validates If-None-Match (the ETag) first, and only then compares If-Modified-Since.
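The validation order can be sketched from the server's side. `resource` is a hypothetical record holding the current ETag and last-modified time; a real server would also send the appropriate response headers.

```javascript
// Server-side revalidation sketch: If-None-Match (ETag) wins over
// If-Modified-Since. Returns the status code the server would send.
function revalidate(reqHeaders, resource) {
  const inm = reqHeaders['if-none-match'];
  if (inm !== undefined) {
    return inm === resource.etag ? 304 : 200;       // ETag comparison first
  }
  const ims = reqHeaders['if-modified-since'];
  if (ims !== undefined) {
    return Date.parse(ims) >= resource.lastModifiedMs ? 304 : 200;
  }
  return 200;                                       // no negotiation fields
}
```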

  • Heuristic caching: when the response header has no Expires, Cache-Control: max-age, or Cache-Control: s-maxage for the strong cache, but does carry a Last-Modified negotiation field, a heuristic cache lifetime applies: (Date value - Last-Modified value) × 10%. Within that window the resource is served from the cache again; after it, the negotiated cache is used.
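The heuristic lifetime is a one-line computation over the two response headers:

```javascript
// Heuristic freshness lifetime: 10% of (Date - Last-Modified), in ms.
function heuristicFreshnessMs(dateHeader, lastModifiedHeader) {
  return (Date.parse(dateHeader) - Date.parse(lastModifiedHeader)) / 10;
}
```

For example, a resource last modified ten days before the response date gets a heuristic lifetime of one day.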

  • Cache location:

    • From memory cache: the cache is stored in memory; reads and writes are fast; the cache disappears when the browser is closed.
    • From disk cache: the cache is stored on disk; reads and writes are slower; the cache survives closing the browser.
    • Read order: memory is searched first, then disk.
    • Storage: large files are preferentially stored on disk and small files in memory; when memory usage is high, the browser also prefers the disk.

2. XSS attack and prevention

  • XSS (Cross-Site Scripting): an attack in which the attacker injects executable code (typically malicious script) into web pages by whatever means available. When the page is opened, the malicious code runs (for example, stealing users' cookies and tokens). There are three types of XSS: stored XSS, reflected XSS, and DOM-based XSS.

    • Stored XSS:

      • 1. The attacker submits malicious code to the website's database
      • 2. A user opens the target website; the server reads the malicious code out of the database, splices it into the HTML, and returns it to the user
      • 3. The browser parses the returned HTML, the malicious code executes, and the user is attacked.
      • Stored XSS example: a forum reply contains malicious code, and the server stores it in the database without validation. When people visit the reply list page, the replies come from the database with the malicious code included, and visitors are attacked.
    • Reflected XSS:

      • 1. The attacker builds a special URL whose parameters contain malicious code
      • 2. A user is induced to visit the URL; the website server reads the malicious code out of the URL, splices it into the HTML, and returns it to the user.
      • 3. The browser parses the returned HTML, the malicious code executes, and the user is attacked.
      • Reflected XSS example: malicious code is entered into a search box and submitted; the server does not sanitize it and combines the returned data directly with the search content, so the browser is compromised when it parses the response.

    • DOM-based XSS:

      • 1. The attacker builds a special URL that contains malicious code
      • 2. A user is induced to visit the URL, and the front end extracts the malicious code from the URL and executes it, for example by reading a URL parameter and rendering it with innerHTML.
      • DOM-based XSS example: a page puts the content of an input box into a URL parameter and then reads that parameter back to display it. The attacker only has to construct a URL containing malicious code and induce users to click it.
  • XSS prevention:

    • Input check: limit the characters allowed in each input scenario, e.g. restrict a phone number field to 11 digits, to reduce the chance of JS scripts being entered. Of course, input checks alone mean little if the attacker bypasses the front end and forges the request directly.
    • Output check: encode output strictly for its context (HTML encoding, JS encoding, CSS encoding, URL encoding, and so on). For example, escape the characters & < > " ' / so that <div><</div> becomes <div>&lt;</div>; the browser then renders &lt; as the character < instead of parsing it as markup.
    • Set a Content-Security-Policy (CSP) HTTP header: a server-defined whitelist that raises the difficulty of XSS attacks by letting the browser block them. The principle is to declare trusted origins; only script resources from trusted origins are requested and executed. For example, if <script src='http://danger.com'></script> is injected into a.com, the resource from http://danger.com will not load unless it is on the CSP whitelist.
    • Set the HttpOnly attribute on cookies: response.setHeader('Set-Cookie', 'token=xx; HttpOnly'); this attribute prevents JS from reading the cookie.
    • Require a verification code for sensitive operations, to prevent scripts from submitting dangerous operations while posing as the user.
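The output-check point can be illustrated with a minimal HTML-context escaper covering the six characters listed above. This is a sketch; production code should use a vetted encoding library, and each output context (JS, CSS, URL) needs its own encoder.

```javascript
// Escape the characters that are dangerous in an HTML context.
function escapeHtml(str) {
  return str.replace(/[&<>"'\/]/g, ch => ({
    '&': '&amp;', '<': '&lt;', '>': '&gt;',
    '"': '&quot;', "'": '&#x27;', '/': '&#x2F;'
  })[ch]);
}
```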

3. CSRF attack and prevention

  • CSRF (Cross-Site Request Forgery): an attacker, carrying the victim's cookie, sends forged requests to a server while disguised as the user.
    • 1. The attacker lures the user to a third-party website
    • 2. The third-party website sends a request to the attacked website
    • 3. The request reuses the registration credentials (cookies) the user previously obtained on the attacked website, bypassing its request validation, so the attacker can perform operations while posing as the user
  • Prevention:
    • Set the SameSite property on the Cookie of the response header
      • SameSite=Strict: cookies are never sent cross-site; they are sent only on same-site requests
      • SameSite=Lax: cookies can be sent cross-site, but only on top-level navigations using GET
    • Verify the Referer: the server checks the Referer (the full source URL of the request, including protocol, domain, port, and parameters) or Origin (the source of the request, including protocol, domain, and port) in the request header to decide whether the request was initiated by a third-party website. (Referer headers can be forged, so this alone is not secure)
    • Use a token: the server issues a token to the client, the client stores it, and the server verifies the token on every request. (CSRF works only because the browser attaches cookies automatically; the attacker cannot read the token and place it in the request header.)

3. Man-in-the-middle attack and attack means

  • Man-in-the-middle attack (MITM: man in the middle): once data leaves one endpoint for another, the sender loses control of it in transit. When a middleman positions himself between the two endpoints to hijack and forward the data, this is called a man-in-the-middle attack.
  • Means of attack
    • HTTP hijacking: when HTTP is used for transmission, the data travels in plaintext. If DNS is hijacked, the client's data is sent to the middleman's server, which forwards it to the target server on the client's behalf; likewise, the target server's responses return to the middleman, who forwards them to the client. Neither the client nor the target server notices that the data has been hijacked, and the data leaks.
    • SSL hijacking: SSL certificate spoofing. Even when HTTPS is used, the SSL handshake itself involves plaintext exchanges. If a middleman sits between the client and the target server, he can forge the server's certificate in transit and send the forged certificate to the client. If the client ignores the certificate error and continues, it encrypts the third random number (the pre-master secret) with the public key in the forged certificate, and the middleman decrypts it with his own private key. Combined with the client and server random numbers that were sent in plaintext, the middleman can derive the symmetric key the client and target server negotiate with their agreed encryption method, and thus read all of their communication.
    • SSL stripping: the middleman replaces the HTTPS hypertext returned by the server with HTTP and returns it to the client, while communicating with the server over HTTPS himself. The client's HTTP plaintext is therefore visible to the middleman, who re-sends it over HTTPS to the server, impersonating the client. As a result, the whole client-server conversation is hijacked by the middleman.
    • What if a middleman extracts the public key from a CA certificate? That is actually fine: the public key is meant to reach the client anyway, and the information the client encrypts with it can only be decrypted with the private key, which the middleman does not have. With only the public key, he cannot decrypt the encrypted data, so even if the CA certificate is intercepted, the public key alone is useless to him.

4. HSTS (preventing 301/302 redirect hijacking)

  • Problems solved by HSTS:
    • 1. Some sites have HTTPS enabled, but most users type the domain name without a protocol, e.g. entering baidu.com directly to reach Baidu's server
    • 2. The browser therefore first visits http://www.baidu.com
    • 3. Baidu's server sees an HTTP request, which is not secure, so it responds with a 302 redirect pointing the client to https://www.baidu.com
    • 4. The client receives the redirect response and navigates to https://www.baidu.com

    However, this process has a problem: the redirect can be hijacked to a phishing site, or a man-in-the-middle can mount an SSL stripping attack. The best solution is for the client to send the HTTPS request in the first place, and that is what HSTS provides.

  • HSTS: HTTP Strict Transport Security, a web security protocol that forces clients to send HTTPS requests (for example, if I type baidu.com into the browser's address bar and HSTS is in effect, the browser itself forces the request to go to https://www.baidu.com)
  • How HSTS works: in step 4 above, the client redirects to https://www.baidu.com and the server responds to that request with a Strict-Transport-Security field in the response header, telling the client to use HTTPS for every subsequent request. The next time the client visits baidu.com, the browser performs an internal 307 redirect straight to https://www.baidu.com.
  • Strict-Transport-Security response header details: Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
    • max-age: the HSTS expiry time in seconds; here, one year.
    • includeSubDomains: optional; enables HSTS protection for the current domain and its subdomains
    • preload: optional; required when adding the server's address to the HSTS preload list (see below)
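A small parser makes the header's structure above concrete. This is a sketch; real clients apply stricter directive handling.

```javascript
// Parse a Strict-Transport-Security header value into its directives.
function parseHsts(value) {
  const parts = value.split(';').map(p => p.trim());
  const maxAge = parts.find(p => p.toLowerCase().startsWith('max-age='));
  return {
    maxAge: maxAge ? Number(maxAge.split('=')[1]) : null,
    includeSubDomains: parts.some(p => p.toLowerCase() === 'includesubdomains'),
    preload: parts.some(p => p.toLowerCase() === 'preload'),
  };
}
```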
  • HSTS limitation: HSTS cannot protect the very first access, which may still go over HTTP; it only forces the client to use HTTPS on every access after the first.
  • Fixing the HSTS limitation: before sending any request, the browser checks a built-in HSTS preload list; any address on the list is forced to HTTPS, even on the first visit. So the fix is to add the server's address to the HSTS preload list. When applying on the HSTS preload list website, one requirement is that the server's HTTP response header carry Strict-Transport-Security with both the includeSubDomains and preload directives.
  • HSTS advantages
    • 1. Enforces secure HTTPS connections, preventing redirect hijacking and man-in-the-middle SSL stripping attacks.
    • 2. Skips the initial HTTP request, server response, and client redirect steps, saving time and resources.
    • 3. In a browser without HSTS, a certificate error shows a warning the user can choose to ignore and continue past; with HSTS, the error page offers no way through to the target page, preventing middlemen from mounting SSL hijacking attacks.

5. Introduce the HTTP2 features

  • 1. Binary framing: HTTP2 splits HTTP messages into smaller frames and encodes them in binary. For example, the header information of HTTP1 now goes into a headers frame and the body into a data frame. (A header or body can also span multiple frames, e.g. header = headers frame 1 + headers frame 2 + ...)

    • A frame consists of a frame header and a frame payload
    • Frame header fields and meanings
      • Length field: 24 bits (3 bytes); the size of the frame payload
      • Type field: 8 bits; identifies whether this is a headers frame, a data frame, or another frame type
      • Flags field: 8 bits; flags specific to the frame type
      • R field: a reserved bit, fixed at 0x0
      • Stream Identifier: identifies which stream the current frame belongs to
    • Stream: a stream represents one complete request-response exchange. A stream has the following characteristics.
      • Bidirectional: data can be both sent and received within the same stream
      • Ordered: the data carried in a stream is the binary frames described above, and frames are sent and received in order
      • Parallel: although frames are ordered, they are transmitted in parallel without waiting on one another, because each frame is numbered and can be merged back in sequence without confusion.
      • Streams can be created or closed unilaterally by either the client or the server
    • The relationship between streams and frames: messages are split into binary-encoded frames (headers frames, data frames, etc.); frames are the units of data within a stream, and they are sent and transmitted in parallel inside it.
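The 9-byte frame header laid out above can be parsed directly from a buffer. A sketch following the field sizes listed (24-bit length, 8-bit type, 8-bit flags, 1 reserved bit plus a 31-bit stream identifier):

```javascript
// Parse an HTTP/2 frame header from the first 9 bytes of a Buffer.
function parseFrameHeader(buf) {
  return {
    length: buf.readUIntBE(0, 3),                 // 24-bit payload length
    type: buf.readUInt8(3),                       // 8-bit type (0x0 DATA, 0x1 HEADERS, ...)
    flags: buf.readUInt8(4),                      // 8-bit flags
    streamId: buf.readUInt32BE(5) & 0x7fffffff,   // mask off the reserved R bit
  };
}
```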
  • 2. Multiplexing: all requests in HTTP2 share a single TCP connection. Specifically, the client sends multiple requests to the server, each split into multiple frames that travel in parallel over the TCP connection. When the frames arrive, the server can reassemble and parse each complete request because all frames of one request belong to the same stream, identified by the Stream Identifier field in the frame header.

    • The problem multiplexing solves: in HTTP1.0, every HTTP request created its own TCP connection, which meant a three-way handshake each time and wasted a lot of time and resources; browsers also limit the number of concurrent requests per domain, so a common optimization was to spread resources across different domains to get around the browser's limit.
  • 3. Header compression: HTTP2 uses HPACK compression to reduce the amount of header data sent with each request.

    • HPACK:

      • Maintain the same static table on both the client and the server: the table maps common header name/value pairs to indexes. To send the header `:method: GET`, the sender only sends its index (2 in this case); the receiver looks the index up in the static table to recover the header name and value.
      • Maintain a dynamic table, initially empty, to which entries can be added: some header values are not fixed, so the table may have no entry for them. HPACK handles such fields as follows.
        • 1. The header contains `user-agent: Mozilla/5.0 xxx`
        • 2. The static table has no entry for that user-agent value, so the value is Huffman-coded: `Huffman(Mozilla/5.0 xxx)`
        • 3. The static table does contain the name `user-agent`, at index 58
        • 4. Index 58 and `Huffman(Mozilla/5.0 xxx)` are sent together: `user-agent: Mozilla/5.0 xxx` => `58: Huffman(Mozilla/5.0 xxx)`
        • 5. The receiver appends the new name/value pair to its table, right after the static table's entries.

        This new entry belongs to the dynamic table. The sender maintains the same dynamic table, so to send the field again it only needs to send the new index, 62.

    • Why header compression: HTTP1.x requests carry the full header data every time, yet much of it (User-Agent, Accept-Language, etc.) rarely changes, so repeating it on every request is wasteful.
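The static-table idea can be illustrated with a toy table. Only a few of the real entries are shown (the real static table has 61), and real HPACK also Huffman-codes literal values and maintains the dynamic table on both sides.

```javascript
// Toy HPACK-style static table; index 0 is unused so that :method GET
// lands at index 2, matching the example above.
const staticTable = [
  null,
  { name: ':authority', value: '' },
  { name: ':method', value: 'GET' },
  { name: ':method', value: 'POST' },
];

// Encode one header: a full match sends just the index; otherwise the
// literal name/value would be Huffman-coded and added to the dynamic table.
function encodeHeader(name, value) {
  const idx = staticTable.findIndex(e => e && e.name === name && e.value === value);
  if (idx !== -1) return { indexed: idx };
  return { literal: { name, value } };
}
```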

  • 4. Server push: the server can respond to the original request and additionally push other resources to the client (pushed resources must obey the same-origin policy). The client then does not have to request every resource itself, reducing request-response time.


6. How does HTTP2 ensure that file transfers are error-free

The Stream Identifier field in the frame header tells the receiver which stream the current frame belongs to, and a stream corresponds to one complete set of request-response data exchanges. The receiver can therefore combine the frames according to the Stream Identifier in each frame header and recover the original complete request.
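A minimal sketch of that reassembly, with frames reduced to `{streamId, payload}` objects:

```javascript
// Group interleaved frames back into per-stream payloads by Stream Identifier.
function reassemble(frames) {
  const streams = new Map();
  for (const { streamId, payload } of frames) {
    streams.set(streamId, (streams.get(streamId) || '') + payload);
  }
  return streams;
}
```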

7. Enumerate and explain all HTTP request methods

  • GET: requests a specific resource (read)
  • POST: submits data to the specified resource (create)
  • DELETE: deletes the specified resource
  • PUT: updates data (update), e.g. updating a field in a record; the fields that did not change are carried along with the updated ones
  • OPTIONS: returns the HTTP request methods the server supports (e.g. the CORS cross-origin preflight request)
  • HEAD: like GET, but returns only the response headers; mainly used to test connectivity
  • TRACE: echoes back the request the server received; mainly used for diagnostics
  • CONNECT: reserved in HTTP1 for proxy servers that can switch the connection into tunnel (pipe) mode
  • PATCH: a complement to PUT for partial updates; only the field being updated and its new value need to be sent

8. What does the browser do when you click a button

  • 1. Create the event instance for the click
  • 2. The event is placed into the event queue via the event loop
  • 3. The pending synchronous tasks finish
  • 4. Capture phase (from the top of the DOM down to the target node)
  • 5. Target phase (the event handler runs, and the default behavior fires if there is one)
  • 6. Bubbling phase (from the target node back up to the top of the DOM)
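The capture, target, and bubbling order in steps 4-6 can be simulated on a simple array standing in for the DOM path from the root to the clicked node (illustrative only, not the real DOM API):

```javascript
// Simulate event propagation over a root-to-target node path.
function dispatch(path) {
  const order = [];
  for (const node of path) order.push(`capture:${node}`);       // top -> target
  order[order.length - 1] = `target:${path[path.length - 1]}`;  // target phase
  for (const node of [...path].reverse().slice(1)) {
    order.push(`bubble:${node}`);                               // target -> top
  }
  return order;
}
```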

9. List the meanings of the HTTP status codes: 2xx, 3xx, 4xx, and 5xx

  • 2xx: the request was processed successfully
  • 3xx: redirection
    • 301: the resource has moved permanently to a new URL; the browser redirects automatically, and future requests should use the new URL
    • 302: the resource has moved temporarily; the client keeps using the original URL
    • 304: the resource is unmodified; the client reads the data from its cache
  • 4xx: client request error
    • 403: the server understood the request but refuses to fulfil it
  • 5xx: server error

10. Multiplexing in HTTP2 vs. long-connection reuse in HTTP1

  • HTTP1.1 long connection: with the Connection: keep-alive header, a single TCP connection can be reused for multiple requests and responses in sequence, instead of opening a new connection for every request.
  • HTTP2 multiplexing: one HTTP2 TCP connection can carry multiple streams, i.e. multiple request-response exchanges. The server uses the Stream Identifier flag in the frame header to determine which stream each frame belongs to and restore the full request.
  • The difference between:
    • The problem with HTTP1.1 long connections is that although one TCP connection can carry multiple requests, it can only process one request at a time, which causes head-of-line blocking: if one request takes a long time, the requests behind it must wait for its response before they can proceed. The root cause is that HTTP1.1 is a text protocol: if multiple requests were processed and multiple responses returned at once, there would be no way to tell which response belongs to which request, so they must be handled sequentially.
    • HTTP2 multiplexing has no such head-of-line blocking problem, because within an HTTP2 TCP connection each request-response exchange corresponds to its own stream, and there can be many streams at once. The request and response of each stream are split into many frames transmitted in parallel; on receiving them, the server uses the Stream Identifier to determine which stream each frame belongs to and assembles the complete request. Even if a frame of stream A was sent first but a frame of stream B arrives first, the server can process and answer stream B's request first, responding on stream B; when the browser receives the response on stream B it knows which request it corresponds to. Compared with HTTP1.1's long connections, and with HTTP1.0's need to establish a TCP connection for every request, HTTP2 multiplexing greatly improves transport performance.

11. What constitutes an HTTP request

  • Request line: contains the request method, URL, and protocol version, e.g. POST /article/news HTTP/1.1
  • Request headers: key: value pairs describing the current request. For example:
    • Content-Length: 112: the request body is 112 bytes long
    • If-Modified-Since: xxx: the cached copy was last modified at time xxx (negotiated cache)
    • Host: localhost:8080: the host the request is intended for, localhost:8080
    • Referer: http://www.BCD.com?code=1: tells the server the request came from http://www.BCD.com?code=1
  • Request body: the content to send, e.g. name=tom&age=18
  • In general, an HTTP request message is laid out as: request line, then request headers, then a blank line, then the request body.

12. Introduce the five-layer network model

  • Application layer: where applications initiate connection requests to the transport layer according to the protocols they follow; for example, a browser is an application that follows HTTP.

    • Application-layer protocols include DNS (Domain Name System), HTTP (Hypertext Transfer Protocol), FTP (File Transfer Protocol), TFTP (Trivial File Transfer Protocol), and TELNET (remote terminal protocol).
  • Transport layer: establishes connections between applications running on different hosts. TCP establishes reliable connections, UDP fast ones; processes are distinguished by port number, and TCP or UDP headers are added on top of the application-layer data.

    • TCP characteristics: TCP is a connection-oriented transport-layer protocol. It is safe and reliable, byte-stream-oriented, and full-duplex, meaning the client and server can send data to each other at the same time.
    • UDP characteristics: UDP is a connectionless transport-layer protocol. It is unreliable and datagram-oriented, but it has low header overhead and supports one-to-one, one-to-many, many-to-many, and many-to-one transmission
  • Network layer: adds a header to the transport-layer data containing the source and destination IP addresses; the result is called a packet.

    • The network is composed of countless subnets (local area networks). Within a subnet, a message is sent as a broadcast; every computer on the subnet receives the message, compares the destination address with its own, and discards the message if they do not match. To tell whether the target computer is on the same subnet, the IP protocol is needed.
    • An IP address defined by the IP protocol is a 32-bit binary number (0.0.0.0 to 255.255.255.255). The first part of an IP address identifies the network and the remaining part identifies the host; if the network parts of two addresses are the same, the two computers are on the same subnet. For example, if the network part is the first two octets, 178.1.2.1 and 178.1.3.2 share the network part 178.1, so they are on the same subnet. But how do we know which part is the network part? That is where the subnet mask comes in.
    • The subnet mask is a 32-bit binary number like the IP address, but with the network bits set to 1 and the host bits set to 0. So if the subnet mask is 255.255.0.0, you only need to AND it with the source IP address and with the target IP address, then compare the two results to know whether the addresses are on the same subnet.
    • Data transmission ultimately depends on the MAC address (the physical address of the device), and each computer has its own unique MAC address. So to communicate we also need to translate the target computer's IP address into its MAC address, and to obtain the target computer's MAC address we use the ARP protocol
    • ARP broadcasts a packet to every computer on the subnet. The packet contains the IP address of the target computer; each computer compares it with its own IP address and either discards the packet or returns its MAC address. (Both sending data and asking for a MAC address are broadcasts; a special value in the packet's destination MAC field distinguishes the two uses, so the receiver knows whether this is data or a MAC-address query.)
    • If the two computers are not on the same subnet, the packets are sent to the gateway for forwarding.
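The AND-and-compare check described above can be sketched in a few lines of JavaScript (the helper names are my own invention):

```javascript
// Convert a dotted IPv4 string to a 32-bit number.
function ipToInt(ip) {
  return ip.split('.').reduce((acc, oct) => acc * 256 + Number(oct), 0);
}

// Two addresses are on the same subnet when ANDing each with the mask
// yields the same network part.
function sameSubnet(ipA, ipB, mask) {
  const m = ipToInt(mask);
  // >>> 0 keeps the bitwise result as an unsigned 32-bit number
  return ((ipToInt(ipA) & m) >>> 0) === ((ipToInt(ipB) & m) >>> 0);
}

console.log(sameSubnet('178.1.2.1', '178.1.3.2', '255.255.0.0')); // true
console.log(sameSubnet('178.1.2.1', '178.2.3.2', '255.255.0.0')); // false
```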
  • Data link layer: according to the Ethernet protocol, network-layer packets are encapsulated into frames. The frame header contains the source and destination MAC addresses. Frames are passed to the physical layer as a bit stream.

    • Ethernet protocol: if you transmit a raw stream of 0s and 1s such as 0101010101101010101, computers cannot tell what it means, so the Ethernet protocol is needed to group the 0/1 signals and assign each group a meaning that both computers can recognize.
  • Physical layer: devices are connected by physical means; this layer converts the bit stream from the data link layer into signals.

DNS and the DNS resolution process

  • DNS: the Domain Name System (DNS) is an Internet service that resolves domain names into IP addresses. In essence it is a distributed database storing the mapping between domain names and IP addresses.
  • DNS resolution: take entering www.newsTV.com in the browser as an example.
    • 1. First look up www.newsTV.com in the DNS cache to obtain the corresponding IP address. (The following three steps make up the client-side DNS cache.)
      • 1.1. Check the browser DNS cache
      • 1.2. Check the operating system hosts file
      • 1.3. Check the router DNS cache
    • 2. If nothing is found in the DNS cache, the browser queries the local DNS server through the operating system's DNS client, and the local DNS server takes the domain name www.newsTV.com to a root DNS server for lookup. (The local DNS server does not store the domain-name mappings itself; it only responds to the DNS query requests it receives. It then recursively queries the DNS servers in the DNS hierarchy that hold the record for the domain name, and returns the resolution result from the server that has the record.)
    • 3. The root DNS server receives the domain name www.newsTV.com, sees that the top-level domain is .com, and returns the IP address of the .com top-level domain server to the local DNS server.
    • 4. After receiving the address from the root DNS server, the local DNS server sends a DNS resolution request to the top-level domain server.
    • 5. The top-level domain server sees that the second-level domain is newsTV, and returns the address of the authoritative DNS server registered for newsTV under .com (the DNS server for the second-level domain newsTV) to the local DNS server.
    • 6. The local DNS server receives the authoritative DNS server's IP address from the top-level domain server and sends it a DNS query request. The authoritative DNS server holds all the records for www.newsTV.com and returns the IP address of www.newsTV.com to the local DNS server.
    • 7. The local DNS server receives the IP address from the authoritative DNS server and returns it to the DNS client, which finally hands the IP address to the browser.
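The root → top-level → authoritative chain above can be modeled as a toy lookup. All server objects, the referral names, and the returned IP are invented for illustration:

```javascript
// Toy model: the local DNS server follows referrals down the hierarchy.
const rootServer = { query: () => ({ referral: 'com-tld-server' }) };
const servers = {
  'com-tld-server': { query: () => ({ referral: 'newsTV-authoritative' }) },
  'newsTV-authoritative': { query: () => ({ answer: '93.184.216.34' }) },
};

function resolveRecursively(domain) {
  let res = rootServer.query(domain);   // start at the root
  while (res.referral) {                // follow referrals until an answer
    res = servers[res.referral].query(domain);
  }
  return res.answer;                    // authoritative server returns the IP
}

console.log(resolveRecursively('www.newsTV.com')); // '93.184.216.34'
```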

14. TCP three-way handshake and four-way wave: reasons and process

  • Reason: a TCP connection provides reliable data transmission. To guarantee reliability, both the server and the client must confirm that the other side can send and receive data normally, which is why three handshakes are needed to establish the connection and four waves to close it.
  • TCP header fields: a TCP header is prepended to the data transmitted by TCP, and its fields drive TCP transmission. (In the accompanying figure, the part above Data is the TCP header.)
    • Source Port: the port number of the sender
    • Destination Port: the port number of the receiver
    • Sequence Number: the sequence number; it does two things.
      • 1. If the TCP data is too large it must be split into segments. To ensure the receiver can reassemble the segmented packets, the sequence number helps the receiver put them back together in order.
      • 2. Together with the Acknowledge Number returned by the receiver, it confirms that a packet was successfully received
    • Acknowledge Number: the acknowledgment number, used together with the Sequence Number.
      • For example, when the server receives a TCP packet from the client whose Sequence Number is 10, it replies with an Acknowledge Number of Sequence Number + 1 (11). When the client sees Acknowledge Number 11 in the server's reply, it knows the server received the packet whose Sequence Number was 10.
    • Code: control flags; there are 6 bits representing 6 flags, and more than one can be set at a time. The important ones are:
      • ACK (Acknowledge): ACK=1 indicates that the sent data was correctly received.
      • SYN (Synchronous): SYN=1 indicates a wish to establish a synchronized connection, that is, a connection request.
      • FIN (Finish): FIN=1 indicates that transmission is complete and disconnection is requested; the sender then waits for the other party's response (the connection is not yet closed).
  • Three-way handshake: prevents an invalid connection request from reaching the server and causing errors. For example, suppose the client's first request is not lost but stuck somewhere, so that it only arrives after the connection has been released. The server would then think the client is making a new connection request, return a response message, and wait for the client to send data; this pointless waiting wastes server resources.
    • 1. The client sends a TCP header (this is only a confirmation step with no actual payload, so no content needs to be sent)
      • The header control flag SYN=1 indicates a wish to establish a connection
      • The header Sequence Number = x (if the server accepts the packet, its Acknowledge Number must be x+1)
    • 2. The server receives the TCP header sent by the client, which shows that the server's receiving ability is normal, and the server responds with a TCP header
      • The header control flag ACK=1 indicates that the sent data was correctly received
      • The header Acknowledge Number = x+1 indicates that the TCP packet sent by the client was correctly received
      • The header control flag SYN=1 indicates that the server also wants to establish a connection with the client
      • The header Sequence Number = y (if the client accepts this packet, it replies with an Acknowledge Number of y+1)
    • 3. The client receives the server's response header; knowing now that its own sending and receiving abilities are normal, it sends another TCP header to the server (to inform the server that the server's sending ability is also normal)
      • The header ACK=1 indicates that the sent data was correctly received
      • The header Acknowledge Number = y+1 indicates that the server's response packet was received successfully
    • 4. The server receives the client's packet with Acknowledge Number = y+1 and now knows its own sending ability is normal. The three-way handshake ends.
  • Four-way wave: ensures that when the client wants to end the connection, the server can still finish sending any remaining data. Otherwise, if the connection closed the moment the client asked, data the client should still receive would be lost.
    • 1. The client sends a TCP header requesting disconnection
      • The header control flag FIN=1 indicates that the client wishes to disconnect
      • The header Sequence Number = x indicates that the client's current sequence number is x (if the server receives this packet, the Acknowledge Number in its response must be x+1)
    • 2. The server receives the client's disconnect request and returns a response header, acknowledging that the client wants to close the current connection
      • The header control flag ACK=1 indicates that the sent data was correctly received
      • The header Acknowledge Number = x+1 indicates that the client's disconnect request packet was correctly received
    • 3. The client now knows the server is aware that it wants to disconnect, and waits for the server to say the connection can be closed. During this period the server continues to send the client any remaining data
    • 4. When the server has no more data to send and can disconnect, it sends a disconnect TCP header to the client
      • The header FIN=1 indicates that the server is done and the connection can be closed
      • The header Sequence Number = y indicates that the server's current sequence number is y (if the client receives this packet, the Acknowledge Number in its response must be y+1)
    • 5. The client receives the server's disconnect packet and knows the server has finished, so both sides can part. The client sends one last TCP packet to tell the server it can disconnect. Why this extra step? Because if the client wants to send data again it can still do so on the open connection; otherwise it would have to perform another three-way handshake after disconnecting.
      • The header ACK=1 indicates that the sent data was correctly received
      • The header Acknowledge Number = y+1 indicates that the server's disconnect packet was correctly received.
    • At this point the client and server have completed connecting and disconnecting, and the four-way wave ends.
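The x/x+1 and y/y+1 arithmetic of the handshake above can be sketched in a few lines (the function name is made up; x and y stand for arbitrary initial sequence numbers, and the four-way wave uses the same pattern with FIN in place of SYN):

```javascript
// Minimal sketch of the sequence/acknowledge arithmetic in the handshake.
function threeWayHandshake(x, y) {
  const packets = [];
  // 1. client -> server: SYN, seq = x
  packets.push({ from: 'client', SYN: 1, seq: x });
  // 2. server -> client: SYN+ACK, seq = y, ack = x + 1
  packets.push({ from: 'server', SYN: 1, ACK: 1, seq: y, ack: x + 1 });
  // 3. client -> server: ACK, ack = y + 1
  packets.push({ from: 'client', ACK: 1, ack: y + 1 });
  return packets;
}

const packets = threeWayHandshake(100, 300);
console.log(packets[1].ack, packets[2].ack); // 101 301
```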

19. CDN, CDN process and CDN cache

  • CDN: Content Delivery Network. In short, for each user request, the central platform's scheduling servers direct the user to the address of the nearest, fastest-responding cache server that holds the requested resource, and the user then fetches the resource from that server. This relieves problems such as origin-server overload, network congestion (large numbers of TCP connections) and slow data transmission.
  • CDN process:
    • 1. The browser accesses www.cdnNews.com/index.html
    • 2. The browser resolves the address using the local DNS service (the DNS resolution process)
    • 3. The authoritative DNS server address is returned to the local DNS server
    • 4. Since the address being accessed is served by a CDN, the authoritative DNS server is actually the website's dedicated CDN DNS server. Through its load-balancing system it resolves the requested resource and returns the IP address of the cache server that will respond fastest for this user
    • 5. The local DNS server passes the IP address returned by the dedicated CDN DNS server to the browser. (DNS resolution is complete)
    • 6. The browser sends a request to the IP address.
    • 7. After receiving the browser's request, the cache server checks whether it has the resource locally; if so, it returns the resource directly to the browser.
    • 8. If it does not have the resource, the cache server requests it from the origin server, caches the resource it obtains, and returns it to the browser
  • CDN cache:
    • 1. The browser sends a data request to the CDN cache server
    • 2. The CDN cache server first checks whether its local data has expired; if it has not expired, it returns the data directly
    • 3. If the CDN cache server finds that its local cache has expired, it requests the data from the origin server, caches the data it obtains locally, and returns the data to the browser.
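The cache-server decision above can be sketched as a small function; `originFetch` and the TTL handling here are invented for illustration:

```javascript
// Sketch of a CDN cache server: serve fresh local data, otherwise
// fetch from the origin and re-cache.
function makeCacheServer(originFetch, ttlMs) {
  const cache = new Map(); // url -> { data, expiresAt }
  return function handle(url, now) {
    const entry = cache.get(url);
    if (entry && entry.expiresAt > now) {
      return { data: entry.data, from: 'cache' };     // fresh local copy
    }
    const data = originFetch(url);                    // expired or missing
    cache.set(url, { data, expiresAt: now + ttlMs }); // re-cache locally
    return { data, from: 'origin' };
  };
}

const server = makeCacheServer((url) => 'content-of-' + url, 1000);
console.log(server('/index.html', 0).from);    // 'origin' (first request)
console.log(server('/index.html', 500).from);  // 'cache'  (still fresh)
console.log(server('/index.html', 2000).from); // 'origin' (expired)
```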

20. HTTP long connections

  • Long connection: one TCP connection can carry multiple HTTP requests and responses. After an HTTP exchange the TCP connection is not closed immediately; by default it is kept open for a period of time, and the browser's subsequent requests can reuse it.
    • Setting a long connection: HTTP/1.1 uses long connections by default; the request header carries Connection: keep-alive by default
  • Advantages of long connection:
    • Reduce congestion (establishing a large number of TCP connections)
    • Reduced latency for subsequent requests (no handshake required)
  • How to disconnect long connections:
    • The server sets the response header Keep-Alive: timeout=x: the client holds the connection for x seconds, after which it is closed due to timeout.
    • The server sets the response header Connection: close to tell the client the connection is being closed; the client destroys the connection after receiving it.
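A hypothetical parser for the Keep-Alive: timeout=x response header mentioned above (the comma-separated key=value layout follows the common server convention, e.g. `timeout=5, max=100`):

```javascript
// Parse a Keep-Alive header value into an object of numeric parameters.
function parseKeepAlive(headerValue) {
  const params = {};
  for (const part of headerValue.split(',')) {
    const [key, value] = part.trim().split('='); // e.g. 'timeout=5'
    params[key] = Number(value);
  }
  return params;
}

console.log(parseKeepAlive('timeout=5, max=100')); // { timeout: 5, max: 100 }
```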

21. What is HTTP protocol and HTTP workflow

  • HTTP: the HyperText Transfer Protocol is an application-layer protocol that defines rules for communication between browsers and servers.
  • HTTP workflow:
    • The browser constructs an HTTP request message
    • Establishing a TCP connection (three-way handshake)
    • Transmitting HTTP requests
    • The server responds to the HTTP request
    • Release the TCP connection (four-way wave)

21. HTTP/1.1 features

  • Connection reuse (long connection): the request header carries Connection: keep-alive by default; the client and server then maintain one TCP connection, which subsequent requests can reuse
  • HTTP pipelining: multiple HTTP requests can be sent together over a long connection, but the server still has to respond to them one by one, which causes head-of-line blocking: if one request in a batch takes a long time to answer, the responses to the other requests are blocked behind it. HTTP/2.0 optimizes this with multiplexing
  • The request message supports the Host field: a server may be configured with multiple domain names for one IP address (multiple virtual hosts on the server correspond to multiple domain names). This field makes it possible to serve different domain names under the same IP address. A 400 status code is returned if Host is missing
    • For example, the server IP address 128.1.2.3 is configured with two domain names, www.a.com and www.b.com
    • When we access www.b.com, the address it resolves to is 128.1.2.3
    • The request is sent to the server, which selects the corresponding virtual host to handle it based on Host: www.b.com
  • New cache policy fields: Cache-Control, If-None-Match/ETag, If-Unmodified-Since and If-Match.
    • If-Unmodified-Since: the counterpart of If-Modified-Since; tells the server to process the request only if the file has not been updated since that time, otherwise it returns status code 412 (Precondition Failed)
    • If-Match: a conditional request. For HEAD and GET requests the value of this field is an ETag, and the server returns the resource only when the ETag value matches.
      • For GET/HEAD requests, If-Match is used together with Range to confirm that the new request targets the same resource as the old one. If the ETag does not match, 416 (Range Not Satisfiable) is returned
  • 24 new error status codes: for example 409 (the request conflicts with the current state of the resource) and 410 (the resource has been permanently deleted from the server)
  • The Range header field is introduced: it allows part of a resource to be requested, returning 206 (Partial Content), which makes resumable downloads possible.
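The Host-based virtual-host selection described above (including the 400 for a missing Host) can be sketched as follows; the domain names come from the example, while the handler shape and contents are invented:

```javascript
// One server IP, two virtual hosts selected by the Host request header.
const virtualHosts = {
  'www.a.com': () => 'site A content',
  'www.b.com': () => 'site B content',
};

function handleRequest(headers) {
  const host = headers['Host'];
  if (!host || !virtualHosts[host]) {
    return { status: 400 };            // missing or unknown Host -> 400
  }
  return { status: 200, body: virtualHosts[host]() };
}

console.log(handleRequest({ Host: 'www.b.com' })); // { status: 200, body: 'site B content' }
console.log(handleRequest({}));                    // { status: 400 }
```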

22. Cookie

  • Cookie: a cookie is a piece of text sent by the server to the client; the client saves it and sends it back to the server on the next visit.
  • Cookie purpose: HTTP is stateless, so given a first and a second request the server does not know both were sent by the same client. But if the second request carries the cookie returned by the server on the first request, the server knows the two requests come from the same client. (For example, the second visit to a website does not require logging in manually: the browser sends the stored cookie, and the server verifies that the cookie information is correct.)
  • Cookie features
    • Requests carry cookies automatically: in short, cookies are attached automatically to requests made by the browser
    • Not cross-domain: this restriction is maintained by the browser, which decides which domain's cookies to attach based on the requested address.
      • Second-level domain issue: take a.com and b.a.com. If we want second-level domains such as b.a.com to also carry a.com's cookie, we can set cookie.setDomain('.a.com') so that a.com's cookie can be used by the second-level domains under it
      • Path issue: accessing www.a.com/file does not carry the cookie of www.a.com. If we expect www.a.com/file to carry www.a.com's cookie, we can set cookie.setPath('/file/'); if any path under www.a.com should carry the cookie, set cookie.setPath('/')
  • Server set cookie:

ctx.set('Set-cookie', 'name=value; path=/user; domain=localhost; max-age=30; HttpOnly; SameSite=Strict')
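A hypothetical helper that assembles the Set-Cookie value used above from an options object (the function and option names are my own):

```javascript
// Build a Set-Cookie header value from a name, value and options.
function serializeCookie(name, value, options = {}) {
  const parts = [`${name}=${value}`];
  if (options.path) parts.push(`path=${options.path}`);
  if (options.domain) parts.push(`domain=${options.domain}`);
  if (options.maxAge !== undefined) parts.push(`max-age=${options.maxAge}`);
  if (options.httpOnly) parts.push('HttpOnly');
  if (options.sameSite) parts.push(`SameSite=${options.sameSite}`);
  return parts.join('; ');
}

console.log(serializeCookie('name', 'value', {
  path: '/user', domain: 'localhost', maxAge: 30, httpOnly: true, sameSite: 'Strict',
}));
// name=value; path=/user; domain=localhost; max-age=30; HttpOnly; SameSite=Strict
```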

  • Cookie properties:
    • path & domain: specify the domain and path; the browser attaches the cookie only when the requested URL matches them.
    • expires: the server specifies a cookie expiration timestamp; once it has passed, the browser no longer keeps the cookie.
    • max-age: specifies the number of seconds the cookie survives; after it expires, the browser no longer keeps the cookie. When present together with expires, max-age takes precedence. If neither is present, the cookie exists only for the current session; once the browser window closes, the browser no longer keeps it
    • HttpOnly: if the server specifies this field, the browser cannot read the cookie with document.cookie.
    • Secure: this attribute is a flag with no value. A cookie marked Secure is attached by the browser only over HTTPS or other secure protocols. If you want to set this field, make sure the page is served over HTTPS.
    • SameSite: this field takes the values Strict, Lax and None (modern browsers default to Lax).
      • Strict: cookies are never sent cross-site; only requests from the current site carry the cookie.
      • Lax: cross-site requests may carry the cookie, but only top-level GET navigations; POST requests carry no cookie.
  • Browser operation cookie:
    • Read cookies: var cookie_ = document.cookie
    • Create a cookie: document.cookie = 'name1=value1; path=/; domain=localhost; max-age=30'
    • Replace a cookie: creating a cookie with the same name replaces the original one: document.cookie = 'name1=value2; path=/; domain=localhost; max-age=30'
    • Delete a cookie (delete the cookie name3): document.cookie = 'name3=value3; max-age=0'. Setting the cookie's max-age to 0 deletes it; if max-age < 0, the cookie lives only in the current session and disappears when the browser window closes.
  • Cookies and preventing XSS and CSRF
    • Set HttpOnly on the cookie to forbid the client from reading it with document.cookie, preventing XSS attacks from stealing the cookie.
    • Set SameSite=Strict on the cookie to forbid all third-party websites from sending it to the server, preventing CSRF attacks.

Cookies and tokens are both placed in the request header. Why can only the former be exploited?

Because the browser automatically attaches cookies to requests sent to the server, and CSRF exploits exactly this to impersonate a logged-in user. A CSRF attack relies on the cookie being carried automatically; it cannot actually read the user's cookies. Even if a token is stored in a cookie and therefore sent automatically, the server verifies the token in the request header rather than in the cookie, so authentication fails as long as the header token is not set. Setting that request header requires client-side code, which CSRF cannot run. So tokens cannot be hijacked this way.

25. Session

  • Session: A block of data stored on the server to record client information. Like cookies, this is to address the stateless nature of HTTP.
  • Session generation process:
    • 1. When the client accesses the server for the first time, the server generates a session and a sessionID for the client. The session saves the client information, and the sessionID identifies the location of the session and returns the sessionID to the client (generally stored in cookies).
    • 2. The next time the client accesses the server, it carries a cookie
    • 3. The server receives another visit from the client, obtains the sessionID in the cookie, searches for the corresponding session in the storage, and obtains the current client information.
  • Different from cookies:
    • Session implementation depends on cookies: the sessionID is stored in a cookie (if cookies are disabled, URL rewriting can be used instead, putting the sessionID in the URL)
    • Cookies are stored on the client and sessions are stored on the server
    • Cookies are generally 4kb in size, and sessions have no size limit

26. webStorage (localStorage and sessionStorage)

  • localStorage: stores persistent shared data
    • localStorage features:
      • Same-origin pages can access and use it, even when they are not in the same browser window
      • It is kept permanently unless deleted manually
      • Saved in the browser, with a size of about 5MB
  • sessionStorage: stores session-scoped shared data
    • sessionStorage features:
      • Shared access only between same-origin pages in the same window
      • It lasts for the current session and is deleted when the session ends, that is, from when the browser window opens until it closes
      • Saved in the browser, with a size of about 5MB
  • webStorage methods (sessionStorage and localStorage share the same methods)
    • xxxStorage.setItem(key, value): stores data as a key-value pair
    • xxxStorage.getItem(key): retrieves the value stored for key
    • xxxStorage.removeItem(key): deletes the specified key
    • xxxStorage.clear(): deletes all data
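localStorage and sessionStorage only exist in browsers; the in-memory stand-in below (invented here) mimics the four methods listed above so the calls can be tried outside a browser:

```javascript
// Minimal in-memory imitation of the webStorage interface.
function createStorage() {
  let store = {};
  return {
    setItem: (key, value) => { store[key] = String(value); }, // values are strings
    getItem: (key) => (key in store ? store[key] : null),     // null when absent
    removeItem: (key) => { delete store[key]; },
    clear: () => { store = {}; },
  };
}

const storage = createStorage();
storage.setItem('theme', 'dark');
console.log(storage.getItem('theme'));  // 'dark'
storage.removeItem('theme');
console.log(storage.getItem('theme'));  // null
```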

27. Brief description of the V8 engine

V8 is a virtual machine specialized in processing JS scripts; it is the JS engine used by Google Chrome and Node.js. JS is an interpreted language: it is compiled while the program runs into intermediate code, which the interpreter then executes as machine operations, so its efficiency is lower. V8 improves performance with a mix of interpretation and compilation, inline caching and other techniques; with these, JS on the V8 engine approaches the speed of a binary program.

  • Initialize the execution environment, such as the stack, execution contexts, built-in functions, etc.
  • V8 parses the source code and generates an AST (abstract syntax tree)
  • V8 also generates scopes and bytecode, and the interpreter can execute bytecode directly
  • If the interpreter finds during bytecode execution that a piece of code has run many times, it hands that code to the compiler to compile into machine code, improving execution speed.

28. Briefly describe PWA

  • PWA: Progressive Web App. In essence it is a web application written with web technologies, plus an App Manifest and a Service Worker to give it installation and offline capability.
  • Advantages:
    • It is essentially a web page, without a native app's startup requirements, so it starts and responds quickly
    • Push notifications can be implemented
    • Offline caching: some offline features work even when the phone has no Internet
  • Manifest: manifest.json specifies details of how the PWA is added to the home screen, such as the icon, background color, size, etc.
    • The HTML head introduces manifest.json: <link rel="manifest" href="manifest.json"/>
  • Service Worker: a service worker acts like an interceptor between the web page (a PWA is essentially a web application) and the server, intercepting HTTP requests in and out to control the current page.
    • Features:
      • After registration and successful installation it runs in the browser background, unaffected by page refreshes, and can intercept all HTTP requests within the page's scope
      • Web sites must use HTTPS (localhost exception)
      • Separate running environment and execution thread
      • It cannot manipulate the page DOM, but it can interact with the page through event-driven messaging.

29. Character encodings

  • ASCII: an encoding establishing the correspondence between English characters and binary values (one ASCII byte is 8 bits of binary)
  • Unicode: an encoding relating all characters to binary values
  • UTF-8: an implementation of Unicode (usually 1 to 4 bytes, compatible with ASCII)
  • GBK/GB2312/GB18030: GB2312 and GBK encode simplified Chinese; GB2312 supports more than 6,000 Chinese characters and GBK more than 10,000. GB18030 also covers traditional characters (Chinese characters are usually stored in two bytes)

30. Differences among ASCII, Unicode and UTF-8

  • ASCII: ASCII uses one byte to represent letters, numbers and symbols; for example the character A corresponds to the byte 01000001, and the mapping table is the ASCII encoding. Simply put, ASCII represents characters (mainly English) and numbers in a single byte.
  • Unicode: there are too many characters in the world to represent with ASCII alone, so Unicode appeared to unify the encoding of all text. Basic Unicode uses two bytes per character; for example the character A is represented in Unicode as 00000000 01000001.
  • UTF-8: a variable-length encoding of Unicode that can represent any Unicode character. Encoding A directly in Unicode wastes a leading byte whose eight bits are all zeros, so UTF-8 was created. It has two forms of representation:
    • UTF-8 single-byte characters (such as everything ASCII can represent): occupy one byte, identical to the ASCII representation.
    • UTF-8 multi-byte characters (2 bytes or more): for an n-byte character, the first n bits of the first byte are all 1 followed by a 0 in bit n+1, the first two bits of every following byte are 10, and the remaining bits hold the Unicode code point.
      • For example, 严 (yan) has the Unicode encoding 01001110 00100101. From the Unicode-to-UTF-8 mapping table (the table and its rules are not expanded here) we know 严 needs three bytes, so it is converted using the multi-byte form:
        • 1. The first three bits of the first byte are all 1, and the fourth bit is 0 (the marker bits).
        • 2. The first two bits of the second and third bytes are 10 (the marker bits)
        • 3. The result: 11100100 10111000 10100101
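The worked example can be checked directly: encoding U+4E25 (严) with the standard TextEncoder yields exactly the three bytes derived above.

```javascript
// Encode U+4E25 to UTF-8 and show each byte in binary.
const bytes = Array.from(new TextEncoder().encode('\u4E25'));
console.log(bytes.map((b) => b.toString(2).padStart(8, '0')));
// [ '11100100', '10111000', '10100101' ]  i.e. 0xE4 0xB8 0xA5
```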

31. How do TCP connections ensure data correctness

  • Checksum mechanism: before sending a TCP segment, a 12-byte pseudo-header is prepended, the whole thing is split into 16-bit words, and the words are summed. The complement of the sum is placed in the Checksum field of the TCP header; the receiver performs the same computation on the received segment, and if the final sum is all 1s the data is correct.

    • (Figure: the 20-byte TCP header)
    • (Figure: the 20-byte TCP header plus the 12-byte pseudo-header)

    • The TCP checksum (UDP's as well) requires prepending a 12-byte pseudo-header (above) to the original TCP segment before the calculation. The segment is then divided into 16-bit (2-byte) words
    • All the words (the segment plus pseudo-header is divided into many 16-bit words) are summed in one's complement arithmetic, and the complement of the result is placed in the Checksum field
      • Note: the sender sets the Checksum field of the TCP header to 0 before summing; the receiver does not zero the Checksum field
    • The receiver computes over the data in the same way. If every bit of the final sum is 1, the data is correct (for UDP, all zeros after complementing means correct).
  • Acknowledgment and sequence-number mechanism (ACK flag plus ack = x+1):

    • Acknowledgment reply ACK=1: as in the three-way handshake, the control flag ACK (Acknowledge) = 1 marks a TCP packet as a response, so receiving a packet whose header has ACK=1 shows that previously sent TCP data was received by the peer
    • Sequence-number mechanism ack=x+1: as introduced in the three-way handshake (Sequence Number: seq = x, Acknowledge Number: ack = x+1), this verifies that a message was received correctly; for example, when the packet with sequence number x is received correctly, ack=x+1 is returned (ACK=1 is the prerequisite for ack=x+1)
  • Connection and disconnection mechanism: the three-way handshake and four-way wave ensure data is transmitted completely and normally

  • Retransmission mechanism: Timeout retransmission mechanism (retransmission timer), fast retransmission mechanism, selective retransmission mechanism, redundant selective retransmission mechanism

  • Traffic control: The sender controls the amount of data to be sent based on the actual receiving capability of the receiver

  • Congestion mechanism: when the network is congested, less data is sent. The sender keeps a congestion window, and before sending takes the smaller of the congestion window and the window advertised by the receiver.
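The checksum computation in the first bullet of this section can be sketched as follows (the sample bytes are arbitrary, the pseudo-header construction is omitted, and `checksum16` is an invented name):

```javascript
// 16-bit one's-complement checksum over a byte array.
function checksum16(bytes) {
  let sum = 0;
  for (let i = 0; i < bytes.length; i += 2) {
    const word = (bytes[i] << 8) | (bytes[i + 1] || 0); // pad odd length with 0
    sum += word;
    sum = (sum & 0xffff) + (sum >> 16); // fold the carry back in (end-around)
  }
  return (~sum) & 0xffff;               // one's complement of the sum
}

// Sender: compute with the checksum field treated as 0, then fill it in.
const segment = [0x45, 0x00, 0x00, 0x3c, 0x1c, 0x46];
const ck = checksum16(segment);

// Receiver: summing everything including the checksum must give all 1s,
// i.e. complementing the total yields 0.
console.log(checksum16([...segment, ck >> 8, ck & 0xff]) === 0); // true
```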

32. TCP four timers

  • Retransmission timer (RTO): the sender sends a TCP packet to the receiver; if the timer expires and no acknowledgment has come back from the receiver, the sender resends the packet and also restarts the congestion mechanism (more on this later)
    • Scenario for the retransmission timer: it copes with packet loss or blocking. Loss means the packet never arrived; blocking means the receiver cannot process the large amount of received data and does not respond to the sender for a long time
    • Retransmission timer process:
      • 1. The retransmission timer is enabled when the sender sends data to the receiver
      • 2. If the retransmission timer ends and no response data is received (packets are lost or blocked)
      • 3. The retransmission timer resets and resends the data
  • Persist timer: this timer is used with the sliding window. After the sender receives a zero-window notification, it guarantees that the sender will still obtain the receiver's window-update packet
    • Persist timer process:
      • 1. The sender receives a zero-window notification from the receiver (Window=0 in the returned TCP header)
      • 2. The sender knows the receiver's buffer is full, so it temporarily stops sending data and waits for the receiver to send back a window-update packet (Window > 0) once the buffered data has been processed
      • 3. Normally the receiver later sends back the window-update packet and the sender resumes sending data
      • 4. But if the receiver appears never to send the window update, the sender cannot simply wait: the receiver may have sent the window-update packet and it was lost, so the sender never received it
      • 5. Therefore the persist timer is started at the moment the sender receives the zero-window notification
      • 6. When the persist timer expires without a window-update packet from the receiver, the sender sends a window-probe packet to the receiver to request a window update, and resets the persist timer
  • Keepalive timer: After a client establishes a TCP connection with the server, the server resets a keepalive timer every time it receives data from the client. If the server receives no data for a long time (the timer expires), it repeatedly sends probe packets to the client to determine whether the client has failed. If no response ever comes back from the client, the server closes the current TCP connection to free resources.
    • Tips: The keepalive timer is reset each time data arrives from the client and typically runs for 2 hours. When it expires, probe packets are sent up to 10 consecutive times at 75-second intervals before the connection is dropped.
  • Time-wait timer: This timer is started when the client sends the last (fourth) wave of the connection teardown, shown as segment 7 in the figure below.
    • Specific process of the time-wait timer: When the client receives the server's third wave (FIN) and responds with the fourth wave (ACK), it starts a 2MSL timer and waits for 2 MSL (MSL is the maximum segment lifetime). If the server's third wave is not received again during this period, the client can close the connection. If the client does receive the third wave again, the fourth-wave ACK must have been lost: the server, having received no confirmation, retransmitted its third wave. The client then resends the fourth-wave ACK and resets the 2MSL timer. Waiting 2 MSL therefore prevents the case where the lost fourth wave causes the client to close while the server stays open, endlessly retransmitting its third wave.

33. Sliding window protocol

  • Sliding window protocol: The sliding window protocol is commonly used to implement flow control and error control, ensuring that data transmission is reliable and efficient. My personal understanding is that the sliding window protocol is the combination of flow control and error control: flow control is achieved by adjusting the window size via the Window field in acknowledgement packets, and error control is implemented by the ARQ protocol
  • Flow control of sliding window protocol:
    • The sequence numbers in the send window represent data frames that have been sent but not yet acknowledged. Once the window is full, the sender's data link layer must block the network layer until a frame buffer becomes available
    • The receiver's data link layer window corresponds to the frames it is allowed to accept; any frame that falls outside the window is discarded
  • Error control of the sliding window protocol: Continuous ARQ is used for error control. When both the send window and the receive window are 1, it degenerates into the stop-and-wait ARQ protocol

34. ARQ protocol (stop waiting ARQ protocol, continuous ARQ protocol)

  • ARQ: The Automatic Repeat reQuest (ARQ) protocol is one of the error-control protocols at the data link layer. Note that ARQ is not a transport layer protocol, so its data unit is the frame; ARQ at the link layer and TCP at the transport layer jointly maintain data reliability. ARQ protocols come in two kinds: the stop-and-wait ARQ protocol and continuous ARQ, where continuous ARQ is further divided into the Go-Back-N ARQ protocol and the Selective Repeat ARQ protocol
  • Stop-and-wait ARQ protocol: Both the send window and the receive window are 1. The sender must wait for the receiver to acknowledge each frame before sending the next one. This mode is simple to implement and uses little memory, but channel utilization is low. The following are the four cases of stop-and-wait ARQ
    • No error: Send a data frame, wait for its acknowledgement, then send the next data frame
    • Timeout retransmission: If the data frame is lost, the receiver cannot receive it and cannot return an acknowledgement. When the retransmission timer expires and no acknowledgement has arrived, the sender resends the frame
    • Acknowledgement loss: The sender sends a data frame, but the acknowledgement returned by the receiver is lost. When the sender retransmits the frame after the retransmission timer expires, the receiver receives a duplicate frame, discards it (the current window has already accepted that frame), and returns the acknowledgement again.
    • Acknowledgement arrives late: The sender sends a data frame and the receiver returns its acknowledgement, but for some reason the acknowledgement is delayed. The sender's retransmission timer expires without an acknowledgement, so it resends the frame; the receiver discards the duplicate and returns the acknowledgement again. The sender receives this acknowledgement and continues with the next frame; when the original, late acknowledgement finally arrives, the sender simply ignores it.
  • Continuous ARQ protocol: Continuous ARQ is divided into the Go-Back-N ARQ protocol and the Selective Repeat ARQ protocol. Continuous ARQ is the combination of the sliding window technique and request retransmission: the window is large enough that frames can flow continuously on the line, hence the name. Generally, TCP relies on Go-Back-N ARQ to achieve transmission reliability; if both receiver and sender use SACK (selective acknowledgement), TCP uses Selective Repeat ARQ to ensure reliable data transmission. SACK is an implementation of selective retransmission
    • Go-Back-N ARQ protocol: The send window is greater than 1 and the receive window is 1, so the sender may send multiple frames in a row, but the receiver can process only one frame at a time. Once a frame is in error (for example the retransmission timer expires), the sender must resend that frame and all frames after it: because the receive window is 1, the receiver cannot remember the out-of-order frames that followed and discards them all. Compared with Selective Repeat ARQ, the more frames that have been sent but not yet acknowledged, the more frames may have to be retransmitted, so when the channel condition is very poor Go-Back-N ARQ can be less efficient than even stop-and-wait ARQ.
    • Selective Repeat ARQ protocol: Both the send window and the receive window are greater than 1. The sender retransmits only the erroneous frames, and the receiver caches the correctly received frames that follow an erroneous one, merging them once the retransmitted frame arrives. Compared with Go-Back-N, this avoids the overhead of retransmitting the correct frames that follow an error
    • Hybrid ARQ: In hybrid ARQ, a packet that arrives with errors is not discarded. The receiver instructs the sender to retransmit the erroneous packet (or all the information), and then combines the retransmitted copy with the previously received copy when decoding
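The difference between the two continuous-ARQ variants can be shown with a toy model (frame numbers only; the function names are invented for this sketch and assume everything after the lost frame was already sent once):

```javascript
// Go-Back-N: the receive window is 1, so out-of-order frames are discarded
// and the lost frame plus everything after it must be resent.
function retransmitGoBackN(sentFrames, lostFrame) {
  return sentFrames.filter((f) => f >= lostFrame);
}

// Selective Repeat: the receiver caches out-of-order frames,
// so only the lost frame itself is resent.
function retransmitSelectiveRepeat(sentFrames, lostFrame) {
  return sentFrames.filter((f) => f === lostFrame);
}
```

With frames 1 to 5 sent and frame 2 lost, Go-Back-N resends 2, 3, 4, 5 while Selective Repeat resends only 2, which is exactly the overhead difference described above.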

35. Cumulative confirmation mechanism

  • Cumulative acknowledgement mechanism: the receiver does not have to acknowledge each packet sent by the sender one by one; instead it acknowledges the last packet that arrived in sequence, which means all data before that packet has been received. (Out-of-order arrival or packet loss causes the receiver to send duplicate ACKs; three consecutive redundant ACKs trigger the sender's fast retransmission mechanism.)
  • Advantages of the cumulative acknowledgement mechanism: it saves resources (multiple packets can be covered by a single acknowledgement ACK) and it is robust: even if an acknowledgement is lost, retransmission may not be needed
  • Disadvantage of cumulative acknowledgement: it cannot promptly tell the sender exactly which data has been received, because the receiver waits for data to arrive in sequence before returning an acknowledgement ACK
  • About piggybacked acknowledgement: my understanding is that piggybacking combines the ACK to be returned and the data to be returned into a single packet, so the other side can use this one packet both to confirm that all previously sent data was received and to receive new data.
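A minimal sketch of how a receiver computes the cumulative ACK (assuming segment numbers starting at 1 for simplicity; the function name is invented for this example):

```javascript
// Returns the cumulative ACK: the next expected segment number,
// meaning every segment before it has arrived.
function cumulativeAck(receivedSeqs) {
  const received = new Set(receivedSeqs);
  let next = 1; // numbering starts at 1 in this toy example
  while (received.has(next)) next++;
  return next;
}
```

Note how a gap (segment 4 missing below) freezes the ACK at the gap even though later segments have arrived, which is exactly the disadvantage described above.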

36. TCP fast retransmission

  • Fast retransmission and fast recovery algorithms: When the receiver receives an out-of-order segment (the segments in between may be lost or merely delayed), it immediately returns a redundant ACK to the sender, telling the sender that an out-of-order segment was received and which sequence number it expects next. If the same redundant ACK appears three or more times in a row, the missing middle segment was most likely lost rather than delayed, so the sender retransmits it immediately without waiting for that segment's retransmission timer to expire; this is fast retransmission. After fast retransmission, the congestion avoidance algorithm of congestion control is executed
  • Redundant ACKs: the duplicate acknowledgement ACKs returned by the receiver after receiving out-of-order packets
  • Fast retransmission process (as shown in the figure, one packet loss occurs): For convenience, assume each segment carries 1 byte, so a seq=x packet is answered with ack=x+1. In reality the sender picks a random initial sequence number x, and for a 1000-byte segment seq=x the acknowledgement would be ack=x+1000.

    • 1. The sender sends TCP packets seq=1, 2, 3, 4, 5 in sequence. Normally the receiver would receive them in the order 1, 2, 3, 4, 5, but here the seq=2 packet is lost.
    • 2. The receiver receives the seq=1 packet and returns an ack=2 acknowledgement, indicating that seq=1 was received and the packet with sequence number 2 is expected next
    • 3. The seq=2 packet is lost. The subsequent seq=3, 4, 5 packets are received but are out of order, so for each of them the receiver returns an ack=2 acknowledgement
    • 4. The sender receives three consecutive ack=2 acknowledgements; these three packets are redundant ACKs, so the sender knows the seq=2 packet was lost and resends it
    • 5. The receiver receives the seq=2 packet. Its buffer already holds the seq=3, 4, 5 packets, so the next expected packet is seq=6 and it returns an ack=6 acknowledgement to the sender. (Note that returning ack=6 implies the receiver cached the seq=3, 4, 5 packets rather than discarding them, i.e. selective-repeat-style buffering is in use)
  • Why does it take 3 redundant ACKS for fast retransmission to occur?
    • First, even though the sender sends TCP segments in sequence, they are encapsulated in IP packets, and IP delivery can be out of order, which means TCP segments may arrive at the receiver out of order. Even without packet loss, such reordering produces redundant ACKs, but it only rarely produces three of them in a row
    • If a packet is actually lost, the segments after it will all generate redundant ACKs, so three or more redundant ACKs are bound to appear
    • Therefore, three redundant ACKs are used as the fast retransmission threshold to distinguish genuine loss from mere out-of-order delivery
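The numbered exchange and the three-duplicate-ACK rule above can be sketched as a small simulation (segment numbers instead of byte sequence numbers; the function name is invented for this example):

```javascript
// Given the order in which segments arrive at the receiver, return the
// cumulative ACK stream the receiver produces and which segment (if any)
// the sender would fast-retransmit after three redundant ACKs.
function fastRetransmitDemo(arrivals) {
  const buffered = new Set();
  let expected = 1;
  const acks = [];
  for (const seq of arrivals) {
    buffered.add(seq);
    while (buffered.has(expected)) expected++; // advance over in-order data
    acks.push(expected); // cumulative ACK: next expected segment
  }
  // Sender side: the 1st ACK for a value is normal; 3 more duplicates trigger
  // fast retransmission of that segment.
  const counts = {};
  let retransmit = null;
  for (const ack of acks) {
    counts[ack] = (counts[ack] || 0) + 1;
    if (counts[ack] === 4 && retransmit === null) retransmit = ack;
  }
  return { acks, retransmit };
}
```

For the arrival order 1, 3, 4, 5 (seq=2 lost), the receiver emits ack=2 four times, so the sender fast-retransmits segment 2; once it arrives, the buffered 3, 4, 5 let the ACK jump straight to 6.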

37. SACK: Selective Acknowledgement

  • SACK: When TCP transmits data and a packet in the middle is lost, the sender normally retransmits that packet and all packets after it, even though the later packets may already have been received by the other side (this corresponds to the Go-Back-N ARQ protocol). SACK (selective acknowledgement) was developed to improve this situation: when the receiver detects out-of-order data, the TCP header of its acknowledgement carries SACK information marking which blocks of data are already stored in its buffer. When the sender retransmits, it uses the SACK information to determine which packets were actually lost and retransmits only those.
  • SACK option: SACK information is carried in the options section of the TCP header and comes in two forms: one indicates whether the endpoint supports SACK, the other carries the specific SACK information
    • SACK-permitted: this option is only allowed in SYN packets, i.e. the first two packets of the TCP handshake, and indicates whether SACK is supported
    • Specific SACK information: this tells the sender which discontinuous blocks of data the receiver has already received and cached. From these blocks the sender determines which ranges are missing and sends the corresponding data
      • Left Edge of 1st Block: the sequence number of the first byte of a discontinuous block
      • Right Edge of 1st Block: the sequence number immediately following the last byte of that block
      • The ranges not covered by any block (below a Left Edge and above the previous Right Edge) have not been received; due to limited option space, at most four block pairs can be carried
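A sketch of how a sender might derive the missing ranges from the cumulative ACK and the SACK blocks described above (treating each Right Edge as exclusive, per the option format; the function name is invented for this example):

```javascript
// ack: cumulative acknowledgement (everything below it is confirmed)
// sackBlocks: array of [leftEdge, rightEdge) pairs already held by the receiver
// Returns the [start, end) ranges the sender still needs to retransmit.
function missingRanges(ack, sackBlocks) {
  const blocks = [...sackBlocks].sort((a, b) => a[0] - b[0]);
  const holes = [];
  let next = ack; // first sequence number not yet confirmed
  for (const [left, right] of blocks) {
    if (left > next) holes.push([next, left]); // gap before this SACKed block
    next = Math.max(next, right);
  }
  return holes;
}
```

For example, with ack=1000 and SACK blocks [2000, 3000) and [4000, 5000), the sender learns it only needs to resend [1000, 2000) and [3000, 4000) instead of everything from 1000 onward.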

  • SACK processing by receiver in case of packet loss or delay:
    • If the sequence number of a received packet is smaller than the expected sequence number, the packet is a duplicate or a network retransmission and can be discarded
    • If the sequence number of a received packet is larger than the expected sequence number, an intermediate packet was lost or delayed. The receiver then sends a SACK packet to notify the sender (the first block in the SACK option must describe the data block that triggered it, and the receiver should fill in as much SACK information as the limited option space allows, keeping the sender up to date on the current transmission state)
  • Sender's response to SACK: the sender maintains a retransmission queue of unacknowledged data; data cannot be released until it is acknowledged. Every data block in this queue has a SACKed flag (similar to true/false, defaulting to false) marking whether that data has been selectively acknowledged. When a SACK message arrives from the receiver, the sender sets the SACKed flag of each covered block in the queue to true; when retransmitting, all blocks whose flag is still false are resent.
  • Tips: SACK requires support from both sides; Linux 2.4 enables it by default. With SACK enabled, TCP effectively uses the Selective Repeat ARQ strategy of the data link layer to ensure reliable data transmission

38. D-SACK: Duplicate Selective Acknowledgement mechanism

  • D-SACK: D-SACK is an extension of SACK. In addition to the normal SACK information described before (which data has been received and which has not), D-SACK adds a description of duplicately received data: the first SACK block describes the duplicate data, while the other blocks carry normal SACK information. This lets the sender judge the current network transmission state more precisely.

  • The figure below shows this use of SACK information: the acknowledgement ACK for the 3000-3499 data is lost, so the sender, believing a loss timeout occurred, resends 3000-3499. After receiving the duplicate 3000-3499 data, the receiver returns SACK information with ACK=4000, telling the sender it expects data starting at sequence number 4000 next

39. TCP flow control

  • Flow control: The sender limits the amount of data it sends based on the receiver's capacity, preventing data loss caused by sending faster than the receiver can process. TCP implements flow control with the sliding window mechanism: the sliding window uses the Window field for flow control and the ARQ protocol for error control

  • Window field: the Window field in the TCP header represents the remaining space in the current receiver's buffer; it tells the sender how much data the receiver can currently process. For example, if the header of an ACK returned by the receiver carries Window=3000, the receiver can process 3000 more bytes. Assuming each TCP segment is 1000 bytes, the sender may then send three TCP segments to the receiver this time.

  • Flow control principle:

    • The Window field in acknowledgement packets controls the sending rate of the sender
    • The size of the sender's send window cannot exceed the window value in the acknowledgement packet
    • Adjusting the window size in acknowledgement packets changes the sender's sending rate
    • When the sender's send window is full of unacknowledged data, the sender stops sending
    • When the window in a received acknowledgement is 0, the sender stops sending data until an announced window greater than 0 arrives
  • Specific process of sliding window:

  • Deadlock problems:

    • The receiver sends a zero-window notification to the sender, and the sender stops sending data because the packet's Window is 0.
    • After processing its buffered data, the receiver sends a window update packet to notify the sender to continue sending. But this packet is lost, so the sender never receives the window update, resulting in deadlock:
    • The sender does not send data and waits for window update packets
    • The receiver waits for the sender to send data
  • Deadlock resolution:

    • When the sender receives a zero-window notification, the persist timer is started
    • If no window update packet has been received when the persist timer expires, a window probe packet is sent to the receiver
    • When the receiver receives the window probe packet, it replies with an acknowledgement carrying its current window value
    • If the window in that acknowledgement is still 0, the persist timer is reset
    • If the window in the acknowledgement is greater than 0, the deadlock is broken and data is sent again
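As a tiny numeric example of the window rule in this section (assuming 1000-byte segments as in the Window field example earlier; the function name is invented):

```javascript
// How many full segments may the sender transmit, given the advertised window?
function segmentsAllowed(advertisedWindow, segmentSize = 1000) {
  if (advertisedWindow === 0) return 0; // zero window: stop and start the persist timer
  return Math.floor(advertisedWindow / segmentSize);
}
```

Window=3000 allows three 1000-byte segments; Window=0 means stop sending entirely until a probe or update reopens the window.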

40. TCP congestion control

  • Congestion causes:The traffic on a link of a packet switching device (such as a router) exceeds the transmission capability of the link. As a result, the number of packets in the output queue increases and the output queue overflows, resulting in packet loss. In short, requests for resources exceed the capacity of store-and-forward nodes.
    • Insufficient storage capacity of the packet switching device: Several data flows converge on the output port of the same packet switching device. If the device does not have sufficient storage space, data packets are discarded.
    • Insufficient bandwidth: High-speed data flows enter low-speed data links. Low-speed data links cannot handle high-speed data, resulting in massive data accumulation and congestion.
    • Insufficient processing capability of the packet switching device: For example, the data input speed of the router is higher than the processing speed of the packet switching device or the processing speed of the packet switching device is higher than the output speed of the output port.
  • Differences between congestion control and flow control:
    • Traffic control is aimed at point-to-point communication. It applies to two hosts and controls the data transmission rate between the two hosts through the sliding window mechanism
    • Congestion control is aimed at end-to-end communication, involving all host routers and other packet switching devices. Congestion control is a global control, which is the result of joint efforts to solve network congestion
  • How to determine network congestion: packet loss is taken as the sign of network congestion; when packets have to be retransmitted, the sender reduces the amount of traffic it sends
  • Congestion control: The sender needs to estimate how much traffic the current network can carry. It maintains a congestion window (CWND), and the actual send window (SWND) is the smaller of the congestion window and the receiver's advertised window (RWND) carried in acknowledgement packets, i.e. actual send window = MIN(congestion window, receiver advertised window), or SWND = MIN(CWND, RWND)
  • Congestion control consists of four algorithms: slow start, congestion avoidance, fast retransmission, and fast recovery
  • Congestion control process:
    • Slow start is used first;
    • When SSTHRESH (the slow start threshold, also maintained by the sender) is reached, the congestion avoidance algorithm takes over;
    • When a timeout retransmission or a fast retransmission occurs (usually caused by packet loss; reordering is possible, but here both events are treated as loss), the slow start threshold (SSTHRESH) and the congestion window (CWND) are adjusted: after a fast retransmission, fast recovery is entered next; after a timeout retransmission, slow start is restarted.
  • Slow start process:
    • Initialization: The sender maintains the congestion window (CWND) field. Assume CWND starts at 1 maximum segment size (MSS), i.e. 1 packet, and the slow start threshold SSTHRESH is 16, i.e. 16 packets. The actual send window = MIN(congestion window, receiver's advertised window); here assume it equals the congestion window
    • 1, send a packet first (because the congestion window (CWND) starts with 1)
    • 2. Congestion window (CWND) = Congestion window (CWND) + 1 = 1 + 1 = 2 when receiving 1 packet confirmation, send 2 packets
    • 3. Congestion window (CWND) = Congestion window (CWND) + 2 = 2 + 2 = 4 when receiving 2 packets confirmation, 4 packets are sent
    • 4. Congestion window (CWND) = Congestion window (CWND) + 4 = 4 + 4 = 8 when receiving 4 packets confirmation, 8 packets are sent
    • It can be seen that after each data transmission round, the congestion window doubles (1,2,4,8, exponential increase).
    • 5. When 8 packets are received, CWND = CWND + 8 = 8 + 8 =16 and 16 packets are sent. At this time, CWND >= slow start threshold SSTHRESH (SSTHRESH =16) and the congestion avoidance algorithm starts
  • Congestion avoidance process: When congestion avoidance begins, the congestion window (CWND) no longer grows exponentially but linearly: after each data transmission round, CWND = CWND + 1
    • 6. Congestion window (CWND) = Congestion window (CWND) + 1 = 16 + 1 = 17 when receiving 16 packets confirmation, 17 packets are sent
    • 7. Congestion window (CWND) = Congestion window (CWND) + 1 = 17 + 1 = 18 when receiving 17 packets confirmation, 18 packets are sent
    • 8. At this point, suppose congestion occurs for one or more of the 18 packets, i.e. a timeout retransmission or fast retransmission happens (in essence this is generally caused by packet loss)
  • If the packet loss during congestion avoidance shows up as a timeout retransmission: the network is considered to be in bad shape, so the slow start threshold (SSTHRESH) and congestion window (CWND) are adjusted and slow start is re-entered
    • Slow Start threshold (SSTHRESH) = Congested Window (CWND) / 2
    • Congestion window (CWND) = 1
    • Re-enter slow start
  • If the packet loss during congestion avoidance shows up as a fast retransmission, i.e. three redundant ACKs are received: the network is considered not so bad, and the handling is to adjust the values of the slow start threshold (SSTHRESH) and congestion window (CWND)
    • Slow Start threshold (SSTHRESH) = Congested Window (CWND) / 2
    • CWND = CWND /2
    • After adjusting the slow start threshold (SSTHRESH) and congestion window (CWND) values, re-enter the congestion avoidance algorithm (this is fast recovery)
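The walk-through above (slow start doubling up to SSTHRESH = 16, then linear growth, then the two loss reactions) can be reproduced with a small simulation (names invented for the sketch; real TCP counts CWND in bytes rather than packets):

```javascript
// CWND after each transmission round: doubles below ssthresh (slow start),
// then grows by 1 per round (congestion avoidance).
function cwndSequence(rounds, ssthresh = 16) {
  const seq = [];
  let cwnd = 1;
  for (let i = 0; i < rounds; i++) {
    seq.push(cwnd);
    cwnd = cwnd < ssthresh ? cwnd * 2 : cwnd + 1;
  }
  return seq;
}

// Reaction to loss: timeout restarts slow start with cwnd = 1;
// fast retransmission halves cwnd and continues in congestion avoidance.
function onLoss(cwnd, kind) {
  const ssthresh = Math.max(Math.floor(cwnd / 2), 1);
  return kind === 'timeout'
    ? { ssthresh, cwnd: 1, phase: 'slow-start' }
    : { ssthresh, cwnd: ssthresh, phase: 'congestion-avoidance' };
}
```

Seven rounds yield 1, 2, 4, 8, 16, 17, 18, matching steps 1 through 8 above, and a loss at CWND = 18 sets SSTHRESH to 9 in both cases.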

41. What is the same-origin policy, what is cross-domain, cross-domain solution

  • Same-origin policy: The same-origin policy is a security policy of the browser. It prevents scripts running in the browser from accessing resources from a different origin (cookies, localStorage, and Ajax-requested resources) without authorization. Two URLs have different origins if their protocol, domain name, or port number differ.
  • Features of the same origin policy:
    • The same-origin policy is a browser security policy; non-browser clients such as Postman are not subject to it
    • The same-origin policy intercepts the response data of cross-origin requests, preventing the returned data from being read by unauthorized scripts
    • Form submission is not affected by the same origin policy, because the same origin policy primarily prevents scripts running in browsers from accessing resources from different sources without authorization
    • Img, link, and script tags are not affected by the same origin policy
      • <img src=xxx />
      • <link href=xxx />
      • <script src=xxx />
  • Behavior of same-origin policy restrictions:
    • The response to the request was intercepted
    • The DOM of a non-same-origin page cannot be accessed
    • Non-same-origin Cookie, LocalStorage, and IndexedDB cannot be read
  • Cross-domain: non-same-origin resources attempt to communicate with each other but cannot because of the same-origin policy
  • Cross-domain solutions:
    • CORS: usually the server does the configuration; if cookies are required, the client must also be configured
    • JSONP: uses the cross-domain capability of script tags
    • Server forwarding: the same-origin policy only restricts browsers, so we send the request to a same-origin server, which forwards it as a proxy
    • postMessage: H5 provides the window method postMessage, which can transfer data across origins, so a non-same-origin iframe embedded in the current page can be used for cross-domain data transfer

42. CORS cross-domain resource sharing

  • CORS: Cross-Origin Resource Sharing, officially published in 2014 as an extension of the HTTP protocol, defines how clients and servers communicate to allow browsers to send XMLHttpRequest requests to cross-origin servers. The browser alone cannot achieve cross-origin resource access; the server must cooperate. CORS is therefore a mechanism in which browser and server cooperate to make cross-origin resource access possible
  • CORS implementation: When a cross-origin request occurs, the browser classifies it as a simple request or a non-simple request. The browser sets the Origin request header, the server sets the Access-Control-Allow-Origin response header, and the browser compares the two to decide whether to block the response data. Note that CORS is configured on the server; the client side is handled automatically by the browser
  • How simple and non-simple requests are distinguished: a request that satisfies both of the following conditions is a simple request; otherwise it is a non-simple request. The reason for the distinction is backward compatibility: a simple request is essentially what an HTML form submission could already send, and form submission itself is not restricted by the same-origin policy, so such requests are sent without a preflight. Of course, a script that merely mimics a simple request also skips the preflight.
    • The request mode is GET, POST, or HEAD
    • The request header information does not exceed the following fields
      • Accept
      • Accept-Language
      • Content-Language
      • Last-Event-ID
      • Content-Type (this field is limited to 3 values)
        • text/plain: content is plain text
        • multipart/form-data: content is form data
        • application/x-www-form-urlencoded: content is key-value pairs (?key1=value1&key2=value2)
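The two conditions above can be captured in a small predicate (a simplified sketch of the rules listed here; the real Fetch specification has further details such as header value restrictions, and the function name is invented):

```javascript
const SIMPLE_METHODS = ['GET', 'POST', 'HEAD'];
const SIMPLE_HEADERS = [
  'accept', 'accept-language', 'content-language', 'last-event-id', 'content-type',
];
const SIMPLE_CONTENT_TYPES = [
  'text/plain', 'multipart/form-data', 'application/x-www-form-urlencoded',
];

// Returns true if the request would be sent without a CORS preflight.
function isSimpleRequest(method, headers = {}) {
  if (!SIMPLE_METHODS.includes(method.toUpperCase())) return false;
  for (const [name, value] of Object.entries(headers)) {
    const key = name.toLowerCase();
    if (!SIMPLE_HEADERS.includes(key)) return false;
    if (key === 'content-type' && !SIMPLE_CONTENT_TYPES.includes(value.toLowerCase())) {
      return false;
    }
  }
  return true;
}
```

This shows why a JSON POST triggers a preflight: `Content-Type: application/json` is not one of the three form-compatible values.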
  • Simple request CORS processing:
    • 1. The browser will automatically add the Origin request header field when sending a simple request to inform the server of the source of the request
    • 2. The server sets the Access-Control-Allow-Origin response header to tell the browser which origins are allowed. When the browser receives the response and finds the current Origin permitted by Access-Control-Allow-Origin, it does not intercept the returned data:
      // Use express to set up the server:
      // '*' allows any origin. To allow a specific origin, replace '*' with it.
      // Note: this field cannot be '*' if cookies are required.
      app.all('*', function (req, res, next) {
        res.setHeader('Access-Control-Allow-Origin', '*');
        next();
      });
  • Non-simple request CORS processing:
    • 1. The browser first sends a preflight request (an OPTIONS request), which tells the server the request method, request header fields, and Origin of the non-simple request about to be sent
    • 2. The server responds to the preflight with the response header fields Access-Control-Allow-Origin (allowed origins), Access-Control-Allow-Methods (allowed request methods for cross-origin resources), and Access-Control-Allow-Headers (allowed request headers for cross-origin resources):
      app.all('*', function (req, res, next) {
        res.setHeader('Access-Control-Allow-Origin', '*');
        res.setHeader('Access-Control-Allow-Methods', 'GET, POST, DELETE, PUT');
        res.setHeader('Access-Control-Allow-Headers', 'X-Requested-With, Content-Type, Authorization');
        // res.setHeader("Content-Type", "application/json; charset=utf-8");
        next();
      });
    • 3. After receiving the preflight response, the browser compares the request method, request header fields, and Origin of the pending request against the response header fields returned by the server. If the requirements are not met, the actual request is not sent; if they are met, the non-simple request is sent to obtain the server's data

43. JSONP handles cross-domain

  • How JSONP handles cross-domain: Script tags can fetch resources across origins. Using this feature, the client defines a callback function to receive server data, then creates a script tag whose URL carries the callback function's name to the server. The server responds with JS code that calls the callback, passing in the server-side data, so the client obtains the server's resources.
    • Client code
       <script>
          function getData(data) {
              console.log('JSONP data returned by server :', data);
          }
      </script>
      <script src="http://127.0.0.1:3001/script?callbackName=getData"></script>
    • Server code
      app.get('/script', (req, res) => {
        const fs = require('fs');
        // Get the JSONP callback function name
        const callbackName = req.query[Object.keys(req.query)[0]];
        // JSON data to return to the client
        const jsonData = JSON.stringify('this is jsonp data');
        // Create jsonp.js: write code that calls the callback, passing in the data
        fs.writeFile(__dirname + '/public/jsonp.js', `${callbackName}(${jsonData})`, function (err) {
          if (err) return res.send(err);
          // Return the jsonp.js file
          res.sendFile(__dirname + '/public/jsonp.js');
        });
      });
    • Contents of the jsonp.js file generated by the server
       getData("this is jsonp data")
  • Tips:
    • Why it is called JSONP: JSONP stands for JSON with Padding, because the returned data is JSON wrapped ("padded") inside a function call
    • Why can’t JSONP use POST requests: Because script tags can only make GET requests

44. Server forwarding solves cross-domain problems

The same-origin policy is a browser security policy; requests between servers are not subject to it. So we can stand up a local server to forward the browser's requests: the browser requests the cross-origin resource from the local server, the local server forwards the request to the target server, and the target server's result is relayed back to the browser. (Of course, the local server still has to handle CORS itself, since its port differs from the page's.)

45. postMessage addresses cross-domain

  • postMessage: postMessage is an API introduced in HTML5 that can transfer data across origins, for example:
    • between a page and a page it opened
    • between a page and an iframe embedded in it
    • between multiple windows
  • The postMessage cross-domain solution: the current page uses postMessage to send data to an iframe page, and listens on the current page for messages the iframe sends back
    • Current page:
      <iframe id="iframe" src="http://127.0.0.1:3001"></iframe>
      <script>
        // Add a listener to listen for data returned from non-same-origin IFrame pages
        window.onmessage = function (e) {
            // Determine that the current message is from the iframe page
            if (e.origin === "http://127.0.0.1:3001") console.log('demo1 :::', e.data);
        }
        // When the iframe is loaded, use postMessage to send a message to the iframe.
        iframe.onload = function () {
            iframe.contentWindow.postMessage('demo1 data', 'http://127.0.0.1:3001');
        };
      </script>
    • Non-same-origin iframe page
       <script>
      // The iframe page listens to receive the sent data
      window.addEventListener('message', function (e) {
          // Check the data source, then reply to the sender via e.source
         if (e.origin === 'http://127.0.0.1:3000') e.source.postMessage('Demo2 returns demo1 data', '*');
      }, false);
      </script>
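Both snippets hinge on checking `e.origin` before trusting a message. That check can be factored out into a small helper; a sketch, with an illustrative allowlist matching the example's origins:

```javascript
// Only accept messages from origins we explicitly trust; everything
// else is ignored. An exact string match avoids prefix tricks like
// "http://127.0.0.1:3000.evil.com".
const TRUSTED_ORIGINS = ['http://127.0.0.1:3000', 'http://127.0.0.1:3001'];

function isTrustedOrigin(origin) {
  return TRUSTED_ORIGINS.includes(origin);
}

function onMessage(e) {
  if (!isTrustedOrigin(e.origin)) return; // drop untrusted senders
  console.log('message from', e.origin, ':', e.data);
}

// window.addEventListener('message', onMessage, false);
```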

46. What processes does the browser have and what threads are in the renderer process

  • Why the browser is multi-process: to keep the browser robust. If the browser were single-process, the crash of any one web application would crash the entire browser.
  • What processes the browser has
    • Browser main process:
      • 1. Responsible for the browser's own interface (forward/back buttons, bookmark bar, etc.)
      • 2. Manages each page; creates and destroys the other processes
      • 3. Draws the final bitmap produced by the render process onto the user interface
      • 4. Manages network resource downloads
    • Plug-in process: Each type of plug-in corresponds to a process, which is created only when the plug-in is used
    • GPU process: A maximum of one GPU process can be used for 3D drawing
    • Render process: multi-threaded; each tab page gets its own process, isolated from the others (so with the browser open you will find one browser main process plus one render process per tab). It is mainly responsible for page rendering, script execution, and event handling
  • Threads in the renderer process
    • GUI rendering thread:
      • 1. Responsible for rendering pages, parsing DOM and CSS files, and constructing DOM trees and CSS trees to synthesize render trees
      • 2. The GUI rendering thread gives the final bitmap to the browser main process for display
      • 3. This thread runs when the page is repainted or reflowed
      • 4. The GUI rendering thread and the JS engine thread are mutually exclusive: while the JS engine thread runs, the GUI thread is suspended and its updates are queued, to be executed once the JS engine thread goes idle (mainly because JS can also manipulate the DOM, and this avoids conflicting DOM operations). This is why long-running JS blocks page rendering
    • Js engine thread:
      • 1. Responsible for parsing and processing JS codes and executing JS codes (V8 engine)
      • 2. There is only one JS engine thread running in a tab page at any time
    • Event trigger thread
      • 1. Mainly controls the event loop (the JS engine thread is too busy to manage this alone, so the browser opens this thread to assist)
      • 2. Asynchronous events are handed to the event trigger thread as they occur
      • 3. When an event's condition is met (a timer fires, a click happens, an Ajax request completes), the event trigger thread puts the event's callback into the task queue to wait for the JS engine thread (after the main-thread code finishes, the JS engine checks the task queue and executes the queued callbacks)
    • Timer thread
      • 1. The thread where setInterval and setTimeout reside
      • 2. Responsible for timing. After timing, add the callback function to the task queue and wait for the JS engine to call
      • 3. Because the JS engine is single-threaded and its code may block, the JS engine thread cannot keep accurate time itself
    • Asynchronous request thread
      • 1. When an XMLHttpRequest connection is made, the browser opens a new thread for it
      • 2. When the state change is detected, if there is a callback function, put the callback function in the task queue to wait for the JS engine thread to call
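The division of labour above — the JS engine thread runs the main script to completion, then drains callbacks that the timer, event, and async-request threads queued — can be modelled with a toy task queue. This is a deliberate simplification (it ignores microtasks and rendering); the function names are illustrative.

```javascript
// Toy model of the event loop: synchronous code runs first,
// queued callbacks (from timers, events, XHR) run afterwards.
const taskQueue = [];
const log = [];

// What the timer / event-trigger / async threads do: queue a callback.
function queueTask(fn) {
  taskQueue.push(fn);
}

// What the JS engine thread does: run main to completion, then
// take tasks out of the queue one by one and execute them.
function runToCompletionThenDrain(main) {
  main();
  while (taskQueue.length > 0) {
    const task = taskQueue.shift();
    task();
  }
}

runToCompletionThenDrain(() => {
  log.push('main start');
  queueTask(() => log.push('timer callback')); // like setTimeout(fn, 0)
  log.push('main end');
});
// log is now: ['main start', 'main end', 'timer callback']
```

The queued callback runs last even though it was queued mid-script, which is exactly why a `setTimeout(fn, 0)` callback never interrupts running code.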

47. From entering the URL to the final page rendering

This article has exceeded the word limit; this part is covered in my other article, From Entering a URL to Page Rendering and Page Rendering Optimization (must-know).

48. Page rendering optimization

This article has exceeded the word limit; this part is covered in my other article, From Entering a URL to Page Rendering and Page Rendering Optimization (must-know).

49. GET/POST differences

  • 1. Semantically, GET means fetching data from the server, while POST means submitting data to the server.
  • 2. GET request parameters usually follow the URL, while POST request parameters are usually placed in the body.
  • 3. Because browsers limit URL length, GET parameters are limited in length: generally the whole URL plus parameters should stay within about 2KB. POST parameters go in the body, so their length is not limited.
  • 4. GET request parameters are placed in the URL and therefore visible, so GET is not suitable for requests carrying sensitive parameters. POST parameters are placed in the body, which is more private than GET, though not absolutely secure.
  • 5. GET request parameters only allow ASCII after encoding (e.g. ?a=卡萨帝 becomes ?a=%E5%8D%A1%E8%90%A8%E5%B8%9D), while POST request parameters have no such encoding restriction.
  • 6. It is also said that a POST request sends data twice: first the header, and after the server returns 100 (Continue), the body (the POST parameters); a GET request sends data only once. (Personally tested in Chrome, POST requests sent the data only once... perhaps different browsers behave differently.)
  • 7. There are also claimed differences around back/refresh caching and so on; I have not encountered them, so I won't cover them here. See the reference articles below for details.
  • Ps: In essence there is no fundamental difference between POST and GET: both transfer data over TCP, and we merely agree on different packet formats for the two. For example, GET parameters go after the URL and POST parameters go in the body, but as long as the client and server agree on where the data is, putting GET parameters in the body would work too.
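Point 5 can be checked directly in Node or a browser console with `encodeURIComponent`, which applies the same percent-encoding of UTF-8 bytes that a GET URL requires:

```javascript
// Non-ASCII GET parameters must be percent-encoded (UTF-8 bytes);
// a POST body carries raw bytes, so no such restriction applies.
const value = '卡萨帝';
const encoded = encodeURIComponent(value);
console.log(encoded); // → %E5%8D%A1%E8%90%A8%E5%B8%9D
console.log(decodeURIComponent(encoded) === value); // → true
```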

*

Thanks to the following reference articles

  • Http2 profile
  • Reveal http2
  • HTTP request/response packet structure
  • Network from transport layer to application layer
  • DNS is analysed
  • Transport Layer For Network Protocols
  • TLS details the handshake process
  • This article introduces you to man-in-the-middle attacks and HSTS
  • Brief analysis of CDN principle
  • Introduction of cookies
  • On the V8 Engine
  • What is the relationship between Unicode and UTF-8/16
  • TCP checksum
  • How does TCP ensure data reliability
  • How to ensure the flow control and congestion control of TCP protocol reliability
  • A guide to understand the TCP “sliding window” protocol
  • TCP’s four types of timers
  • TCP fast retransmission mechanism
  • Are you still worried about TCP retransmission, sliding Windows, flow control, congestion control?
  • Selective retransmission of ARQ protocol
  • TCP SACK option description
  • Fast retransmission and fast recovery algorithm
  • Chapter 10 Sliding Window protocol -1
  • Ch5-l13 TCP- Error control – Flow control
  • TCP half-connection and full connection queues
  • Browser Rendering overview
  • Layered context rendering layers Compound layers (hardware accelerated) distinguish and relate
  • GET vs. POST
  • Explain the HTTP protocol
