While writing my interview-programming notes, the back-end-systems section grew too long, so I decided to split it out and write something more network- and server-oriented.

🍉 : The answer is not written yet.

🍊 : Involves algorithms and handwritten code.

🔔 : questions I actually encountered in interviews

Contents

  1. Front-end engineering
  2. Browser + Network + Communication + Security
  3. NodeJS
  4. other

For React and JS questions, see the related portals (separate posts).

Front-end engineering

🔔 Webpack workflow

  • Starting from the module(s) configured in entry, recursively resolve all modules that the entry depends on
  • Every time a module is found, Webpack looks up the matching conversion rules based on the configured loaders
  • After converting a module, parse out the modules that the current module depends on
  • These modules are grouped by entry: an entry and all of its dependent modules are assigned to one chunk
  • Finally, Webpack converts all chunks into files for output
  • Webpack executes the logic defined in the Plugin at appropriate times throughout the process
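
As a concrete reference, here is a minimal webpack.config.js sketch mapping the steps above to configuration fields (the entry path, loaders, and plugin are illustrative assumptions, not a prescribed setup):

```js
// Minimal sketch: entry -> per-module loaders -> chunks -> output files,
// with a plugin hooking into Webpack's lifecycle events.
const path = require('path');
// assumes html-webpack-plugin is installed; any plugin works the same way
const HtmlWebpackPlugin = require('html-webpack-plugin');

module.exports = {
  entry: './src/index.js', // dependency resolution starts here
  output: {
    filename: '[name].bundle.js', // each chunk is emitted as a file
    path: path.resolve(__dirname, 'dist'),
  },
  module: {
    rules: [
      // each module found is matched against these rules and converted
      { test: /\.css$/, use: ['style-loader', 'css-loader'] },
    ],
  },
  plugins: [new HtmlWebpackPlugin()], // listens for lifecycle events
};
```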

🔔🍉 Configuration of Webpack

🔔🍉 Webpack packaging optimization

The execution order of loaders

Loaders execute from back to front: because Webpack composes them like nested function calls (functional style), loaders run from right to left. If Webpack used a pipe style instead, they would run from left to right. A sketch follows.
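
A minimal sketch of what this means in a rule's use array (the .less chain is an illustrative assumption):

```js
module.exports = {
  module: {
    rules: [
      {
        test: /\.less$/,
        // runs right to left, i.e. style-loader(css-loader(less-loader(src)))
        use: ['style-loader', 'css-loader', 'less-loader'],
      },
    ],
  },
};
```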

What is the difference between Loader and Plugin?

  • A Loader is essentially a function that transforms the content it receives and returns the converted result. Since Webpack only understands JavaScript, loaders act as translators, preprocessing other types of resources.
  • A Plugin is a plug-in that extends Webpack's functionality: Webpack broadcasts many events over its lifecycle, and a plugin can listen for these events and change the output at the appropriate time through the APIs Webpack provides.

About Babel

  1. What is Babel? What does it do?
    1. Babel is an ES6 transcoder that converts ES6 code into ES5 code for compatibility with platforms that do not yet support ES6
    2. Note: Babel is a JS compiler, but it only translates syntax, not native objects or some new APIs (e.g., Set, Promise), which is why polyfills were introduced
  2. How do I use Babel?
    1. Babel's configuration file is .babelrc, stored in the project root; it configures transcoding rules and plugins (the officially provided rule set is @babel/preset-env)
    2. Basic configuration: { "presets": [], "plugins": [] }
    3. Some important modules
      1. @babel/register overwrites the require command by adding a hook to it; from then on, whenever files with .js, .jsx, .es, or .es6 suffixes are loaded via require, Babel transcodes them first
      2. @babel/core (transcode using the Babel API)
      3. @babel/polyfill (to polyfill the new APIs)
  3. How does Babel transform code?
    1. Parsing: parse the old code into an AST (abstract syntax tree)
    2. Transforming: convert it into a new AST according to the rules defined by the Babel plugins supplied by the developer
    3. Generation: generate code from the new AST
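
A minimal sketch of this parse/transform/generate pipeline driven through @babel/core (assumes @babel/core and @babel/preset-env are installed; the input snippet is arbitrary):

```js
const babel = require('@babel/core');

const es6Source = 'const add = (a, b) => a + b;';

// parse -> transform (per the preset's rules) -> generate
const { code } = babel.transformSync(es6Source, {
  presets: ['@babel/preset-env'],
});

console.log(code);
// e.g. "var add = function add(a, b) { return a + b; };"
```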

🍉 The principle of tree shaking

🔔 Modularity

- CommonJS: one file is one module; expose a module with exports.xxx = ... or module.exports = {...}; import a module with require(...); require(...) executes synchronously.
- AMD: define a module with define(...); load a module with require(...) (same method name as in the CommonJS spec; note the difference between CommonJS and RequireJS); dependencies are declared up front and executed ahead of time.
- CMD: one file is one module; define a module with define(...) (similar to AMD); load a module with require(...) (similar to AMD); execution is as lazy as possible (unlike AMD).
- UMD: checks whether AMD is supported, then whether CommonJS is supported; if neither, falls back to global variables.
- ES Module (ESM): import modules with the import keyword or the import(...) method; expose modules with the export keyword.

AMD relies on front-loading, CMD relies on proximity.

CommonJS vs. ES6 modularity differences

  1. CommonJS supports dynamic paths at import time, e.g. require(`${path}/xx.js`); ES6 does not currently support this statically, but dynamic import() has been proposed.
  2. CommonJS is a synchronous import, ES6 is an asynchronous import.
    • CommonJS because it is used on the server, the files are local, and synchronous import has little impact even if the main thread is blocked.
    • ES6, because it is used in the browser, requires downloading files. If synchronous import is also adopted, it will have a great impact on rendering.
  3. The CommonJS module exports a copy of the value; the ES6 module exports a reference (live binding) to the value (see the sketch after this list).
    • The CommonJS module exports a copy, meaning that once a value is exported, changes inside the module do not affect it; conversely, if the exported value later changes, the already-imported value does not, so to pick up the new value you must re-import.
    • ES6 uses live binding: the imported and exported values point to the same memory address, so the imported value changes along with the exported value.
  4. CommonJS modules are loaded at run time; ES6 modules are resolved at compile time.
    • A CommonJS module is an object: importing loads the entire module, generates an object (which only exists after the script runs), and then reads methods from that object. This is called "runtime loading".
    • An ES6 module is not an object; its external interface is a static definition. The module is resolved in the static parsing phase before the code runs (i.e., at compile time), which makes its loading more efficient than CommonJS.
  5. The ES module allows static analysis, enabling optimizations like tree-shaking, and provides advanced features such as circular references and dynamic binding.
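
A minimal sketch of point 3, copy vs. live binding (file names are illustrative assumptions):

```js
// counter.js (CommonJS): the exported value is a copy
let count = 0;
exports.count = count; // copied once, at export time
exports.increment = () => { count++; };

// main.js
const counter = require('./counter');
counter.increment();
console.log(counter.count); // 0: the copy does not follow the update
```

```js
// counter.mjs (ES Module): exports are live bindings
export let count = 0;
export const increment = () => { count++; };

// main.mjs
import { count, increment } from './counter.mjs';
increment();
console.log(count); // 1: the live binding reflects the update
```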

The front-end routing

There are two approaches:

hash

  1. The hash is appended to the URL after a #
  2. Access it via window.location.hash
  3. Hash changes do not trigger a page refresh
  4. Listen for the hashchange event
  5. Execute different callback functions for the changed hash value
  6. Browser support is good

history API

  1. A set of HTML5 APIs, available through window.history
  2. The main APIs are pushState, replaceState, go, back, and forward
  3. By listening for the popstate event, you can observe user-triggered navigation such as go, back, and forward
  4. pushState and replaceState do not fire popstate; to observe them you need an extra step and must wrap (override) the methods yourself, as sketched below
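
A minimal sketch of both mechanisms (the route table and custom event name are illustrative assumptions):

```js
// Hash routing: map hashes to render callbacks
const routes = {
  '#/home': () => console.log('render home'),
  '#/about': () => console.log('render about'),
};
window.addEventListener('hashchange', () => {
  const render = routes[window.location.hash];
  if (render) render(); // hash changes never reload the page
});

// History routing: pushState does not fire popstate, so wrap it yourself
const rawPushState = history.pushState;
history.pushState = function (...args) {
  rawPushState.apply(this, args);
  window.dispatchEvent(new Event('pushstate')); // custom event, an assumption
};
window.addEventListener('pushstate', () => console.log('route changed'));
window.addEventListener('popstate', () => console.log('back/forward/go'));
```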

Browser + Network + Communication + Security

🔔 The process from entering the URL to rendering the page

  1. Look up caches
  2. DNS resolves the domain name to an IP address
  3. Establish a TCP connection (three-way handshake)
  4. Communicate (send the request, receive the response)
  5. Parse the response and render the page (build the DOM tree)
  6. Close the TCP connection (four-way wave)

A few key knowledge points expanded from the steps above

1. Browser caching mechanism

- Forced (strong) caching takes precedence over negotiated caching.
- If the forced cache is valid, the browser uses the cached result directly; if it is not, it falls back to negotiated caching.
- In negotiated caching, the server decides whether the cache may be used.
- If the negotiated cache is invalid, the request result is fetched again and stored in the browser cache; if it is valid, the server returns 304 and the browser keeps using the cached result.
1. Forced caching: the browser looks up the request result in its cache and decides whether to use it based on that result's caching rules. The fields are Expires and Cache-Control, with Cache-Control taking priority over Expires.
  - Cache-Control values
    - public: everything may be cached (both client and proxy servers)
    - private: only the client may cache (the default value of Cache-Control)
    - no-cache: the client may cache the content, but whether to use the cache must be negotiated with the server
    - no-store: nothing is cached
    - max-age=xxx (xxx is a number): the cached content expires after xxx seconds
2. Negotiated caching: after the forced cache expires, the browser sends a request to the server carrying the cache identifiers, and the server uses them to decide whether the cache may be used.
  - The negotiation fields are Last-Modified/If-Modified-Since and ETag/If-None-Match, with ETag/If-None-Match taking priority over Last-Modified/If-Modified-Since.
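
A minimal sketch of both cache types from the server side, using Node's http module (the fixed ETag value and port are illustrative assumptions; real servers derive the tag from the content):

```js
const http = require('http');

const body = 'hello';
const etag = '"v1"'; // assumed fixed version tag, for illustration

http.createServer((req, res) => {
  // negotiated cache: the id sent back by the browser decides
  if (req.headers['if-none-match'] === etag) {
    res.writeHead(304); // still valid: no body, browser reuses its cache
    return res.end();
  }
  res.writeHead(200, {
    'Cache-Control': 'max-age=60', // forced cache: fresh for 60 seconds
    ETag: etag,                    // id for later negotiation
  });
  res.end(body);
}).listen(3000);
```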

2. DNS domain name resolution process

- Check the browser cache
- Check the operating-system cache, e.g. the hosts file
- Check the router cache
- Query the ISP's LDNS server
- Finally, resolve iteratively, level by level, starting from the root domain name servers

2.1 DNS Security Issues

3. Three-way handshake

- First handshake: to establish the connection, the client sends a SYN packet (seq = j) to the server and enters the SYN_SENT state, waiting for the server's confirmation.
- Second handshake: on receiving the SYN, the server acknowledges the client's SYN (ack = j + 1) and sends its own SYN packet (seq = k); the server then enters the SYN_RECV state.
- Third handshake: on receiving the server's SYN+ACK, the client sends an ACK packet (ack = k + 1); once it is sent, client and server enter the ESTABLISHED state (the TCP connection succeeds), completing the three-way handshake.
- After the three-way handshake completes, the client and server begin to transfer data.

4. Four-way wave

- First wave: the browser sends a FIN, requesting to disconnect.
- Second wave: the server sends an ACK to agree. It could also send its own FIN here, but the server may still have data to send, so it defers its FIN to the third wave.
- Third wave: once its data is sent, the server sends its FIN to request disconnection.
- Fourth wave: the browser returns an ACK to confirm.

5. Browser rendering

- The browser uses an HTML parser to parse the HTML into a DOM tree, by depth-first traversal.
- The browser uses a CSS parser to parse the CSS into a CSS rule tree (the CSSOM tree).
- JavaScript manipulates the DOM tree and CSSOM tree through the DOM API and CSSOM API, and the results are applied to layout and rendering as required.
- After the DOM tree is built, an internal drawing model is constructed from the CSS styles, generating the RenderObject tree; a RenderLayer tree is then built according to the page's layer hierarchy, along with a virtual drawing context.
- Painting: traverse the render tree and call the hardware graphics APIs to draw each node.
- Layout (reflow): when the geometry of any node in the render tree changes, the positions of all nodes on the screen are recomputed.
- Repaint: when a style attribute of any element changes without affecting geometry (font color, background, etc.), the render tree is repainted.

6. Page rendering optimization

- Keep the HTML document structure and the style hierarchy as simple as possible.
- Place scripts as late as possible; defer what can be deferred.
- Reduce DOM operations, cache DOM lookups, and cache DOM style information to avoid excessive reflow.
- Avoid modifying element styles directly in JavaScript; prefer changing class names or using CSS animations (see the sketch below).
- Hide off-screen content, and pause animations while the page is scrolling where possible.
- Enable domain-name pre-resolution (dns-prefetch) for sites with multiple domain names.
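
A small sketch of the "class name instead of per-property style writes" advice (the #box element and highlight class are illustrative assumptions):

```js
const el = document.querySelector('#box');

// Less efficient: several writes, each can invalidate layout
// el.style.width = '100px';
// el.style.height = '50px';

// Better: one class change, one style recalculation
el.classList.add('highlight');

// Cache layout reads instead of re-querying inside loops
const { width } = el.getBoundingClientRect();
console.log(width);
```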

About HTTP and HTTPS

  • HTTP: an application layer protocol based on TCP
  • HTTPS: establishes TLS/SSL encryption layer over HTTP and encrypts transmitted data. HTTPS is the secure version of HTTP.
  • Common packet field
    1. The request message
      • Accept: the media types the client can handle (commonly text/html, application/json)
      • Host: the host name of the server the resource is requested from
      • Referer: the page from which the request originated
      • User-Agent: sends the requesting browser/agent name and related information to the server, e.g. User-Agent: Mozilla/5.0 (Linux; Android 5.0; SM-G900P Build/LRX21T) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.84 Mobile Safari/537.36
      • Several cache-related headers: If-Match, If-Modified-Since, If-None-Match, If-Range, and If-Unmodified-Since
    2. The response message
      • ETag: an identifier for the entity resource, which can be used to request a specific version of the resource.
      • Retry-After: the server tells the client how long to wait before retrying; used with 503 and 3XX redirect responses.
      • Server: tells the client about the HTTP server software currently in use.
    3. Entity head field
      • Allow: informs the client of the request methods the server supports; when the server receives an unsupported method, it responds with 405 (Method Not Allowed).
      • Content-Encoding: tells the client the content encoding of the resource.
      • Content-Language: tells the client the natural language used by the resource.
      • Content-Length: tells the client the length of the resource.
      • Content-Location: tells the client where the resource is located.
      • Content-Type: indicates the media type of the resource; takes the same values as Accept in the request headers.
      • Expires: tells the client when the resource expires; can be used for caching.
      • Last-Modified: indicates when the resource was last modified.
    4. Generic message field
      • Cache-Control: controls cache behavior.
      • Connection: manages persistent connections; with the value keep-alive, long connections can be used.
      • Date: the date and time at which the HTTP message was created.
    5. Other message fields
      • Cookie: a request-header field; cookies are attached to requests to carry HTTP state.
      • Set-Cookie: a response-header field, used when the server passes cookie information to the client.
  • Figure: TCP/IP four layer network model

🔔HTTPS Handshake process

  • The client initiates the first handshake, whose purpose is to obtain the server's digital-signature certificate. Before sending the certificate, the server confirms the client's SSL/TLS version and encryption algorithms.
  • After the first handshake, the second handshake begins: the client, having received the certificate, sends the pre-master secret (the key material used for AES encryption and decryption) to the server, encrypted with the public key obtained in the first handshake. The server receives the encrypted AES key and decrypts it with its private key. In this way, after the second handshake both the client and the server hold the AES key.
  • Once both client and server hold the AES key, HTTP packets can be encrypted and decrypted. From this point on it is symmetric encryption, not RSA; even if traffic is hijacked by a third party, the third party cannot read it unless one side leaks the key. Symmetric encryption is also used to improve HTTPS performance, since the overhead of HTTPS itself is not negligible.

🔔 The advantages of HTTPS

  • Why: the HTTP protocol transmits information in plaintext, which carries risks of eavesdropping, tampering, and hijacking; the TLS/SSL protocol provides identity authentication, information encryption, and integrity verification to improve security.
  • Main Functions of HTTPS
    1. Encrypt data and establish an information security channel to ensure data security during transmission
    2. Real identity authentication for the web site server
  • Implementation: TLS/SSL relies mainly on three basic primitives: hash functions, symmetric encryption, and asymmetric encryption. Asymmetric encryption implements identity authentication and key negotiation, symmetric encryption uses the negotiated key to encrypt the data, and the hash function verifies message integrity.

The HTTP status code

2XX(Success Status code)

  • 200: indicates that the request from the client is successfully processed on the server
  • 204: Successful processing, but no resources to return. This is usually used when only information needs to be sent from the client to the server, and no new information content needs to be sent to the client
  • 206: Partial Content, the server successfully processed part of a GET request (a range request)

3XX(Redirection status code)

The 3XX response results indicate that the browser needs to perform some special processing to properly handle the request

  • 301: Permanent redirect. This status code indicates that the requested resource has been assigned a new URI and that the URI to which the resource now refers should be used later. That is, if the URI corresponding to the resource is already bookmarked, it should be saved again as indicated in the Location header field
  • 302: Temporary redirect. The server currently responds to requests from web pages in different locations, but the requester should continue to use the original location for future requests
  • 303: View other locations. This status code indicates that because another URI exists for the requested resource, the GET method should be used to GET the requested resource.
  • 304: Unmodified. This has nothing to do with redirection. Indicates that the requested page has not been modified since the last automatic request. The server returns this response, not the content of the web page
  • 307: Temporary redirection servers currently respond to requests from web pages in different locations, but requesters should continue to use the original location for future requests

4XX(Client Error Client Error status code)

  • 400: Bad Request. The request message contains a syntax error; the request content must be corrected and re-sent. Note that some browsers treat this status code like 200 OK.
  • 401: Unauthorized. The request requires authentication; the server may return this for a page that requires login.
  • 403: Forbidden. The server refuses the request.
  • 404: Not Found. The server could not find the requested page.
  • 405: Method Not Allowed. The method specified in the request is not allowed; the permitted methods can be announced via the Allow header (or Access-Control-Allow-Methods for CORS).

5XX(Server Error Server Error status code)

  • 500: Internal Server Error
  • 502: Bad Gateway. The server, acting as a gateway or proxy, received an invalid response from the upstream server.
  • 503: Service Unavailable. The server is currently unavailable (overloaded or down for maintenance); usually a temporary state.
  • 504: Gateway Timeout. The server, acting as a gateway or proxy, did not receive a response from the upstream server in time.

Key differences between HTTP1.0, 1.1, and 2.0

Differences between HTTP1.1 (1999) and HTTP1.0 (1996)

  1. Caching: HTTP1.0 mainly uses If-Modified-Since and Expires for cache control; HTTP1.1 introduces more cache-control strategies such as Entity tag (ETag), If-Unmodified-Since, If-Match, and If-None-Match.
  2. Bandwidth optimization and network-connection usage: in HTTP1.0 there is some waste of bandwidth, e.g. the client only needs part of an object but the server sends the whole object, and resumable downloads are not supported. HTTP1.1 introduces the Range request header, which allows requesting only part of a resource; the return code is 206 (Partial Content), making it easy for developers to make full use of bandwidth and connections.
  3. Error-notification management: HTTP1.1 adds 24 error status codes, e.g. 409 (Conflict) indicates that the requested resource conflicts with its current state, and 410 (Gone) indicates that a resource on the server has been permanently deleted.
  4. Host header handling: HTTP1.0 assumes each server is bound to a unique IP address, so the URL in the request message does not carry a hostname. With the growth of virtual hosting, a single physical server can host multiple virtual hosts (multi-homed web servers) sharing one IP address. In HTTP1.1, both request and response messages should support the Host header field, and a request without one is rejected with 400 (Bad Request).
  5. Persistent connections: HTTP1.1 supports long connections and pipelining, delivering multiple HTTP requests and responses over a single TCP connection, which reduces the cost and latency of establishing and closing connections. Connection: keep-alive is enabled by default in HTTP1.1, somewhat compensating for HTTP1.0 creating a new connection on every request.

The difference between HTTP2.0 (2015) and HTTP1.1

  1. New binary format: HTTP1.x parsing is text-based, and parsing a text-based protocol format has natural defects, since text has many representations and robustness requires handling many scenarios. Binary is different: only combinations of 0 and 1 are recognized. For this reason, HTTP2.0 adopts a binary format, which is convenient and robust to implement.
  2. Multiplexing (connection sharing): each request gets an ID, so one connection can carry multiple requests; the requests on a connection can be interleaved arbitrarily, and the receiver reassembles them by request ID and dispatches them accordingly.
  3. Header compression: HTTP2.0 uses an encoder (HPACK) to reduce the size of the headers to be transferred; each side caches a table of header fields, avoiding repeated header transmission and reducing the size that must be transferred.
  4. Server push: like SPDY, HTTP2.0 has server push functionality.

The difference between keep-alive for HTTP1.x and multiplexing for HTTP2.0

  1. HTTP/1.0: one request-response per connection; a connection is established for each request and closed when done.
  2. HTTP/1.1 pipelining: several requests can be queued on one connection, but they are processed serially; if one request is slow or times out, all subsequent requests behind it are blocked (head-of-line blocking).
  3. HTTP/2: multiple requests can be executed in parallel on a single connection at the same time; one time-consuming request does not affect the normal execution of the others.

What are the benefits of multiplexing

  • The key to HTTP performance optimization is not high bandwidth, but low latency. TCP connections “tune” themselves over time, limiting the maximum speed of the connection at first and increasing the speed of the transfer over time if the data is successfully transferred. This tuning is called TCP slow start. For this reason, HTTP connections that are inherently abrupt and short become very inefficient.
  • HTTP/2 enables more efficient use of TCP connections by having all data flows share the same connection, allowing high bandwidth to truly serve HTTP’s performance gains.

About cookies

🔔 How to obtain and set cookies

  1. Read in the browser: document.cookie
  2. Set in the browser: document.cookie = 'testkey=hahah; path=/; domain=.juejin.im'

Cookie common attributes

  1. name
  2. value
  3. domain
  4. path
  5. expires / max-age: set the cookie's lifetime; expires is an absolute time, max-age is in seconds, and a negative max-age means the cookie is temporary (session-only) and no cookie file is written
  6. secure: when set to true, the cookie is transmitted only over secure protocols such as HTTPS/SSL
  7. HttpOnly: when set to true, the cookie value cannot be read from JS scripts, which effectively mitigates XSS attacks (see the sketch below)
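
A small sketch of reading a cookie by name and setting attributes from JS (the cookie name is an illustrative assumption; note that HttpOnly can only be set by the server via a Set-Cookie header, never from scripts):

```js
function getCookie(name) {
  const pair = document.cookie
    .split('; ')
    .find((p) => p.startsWith(name + '='));
  return pair ? decodeURIComponent(pair.slice(name.length + 1)) : undefined;
}

// only non-HttpOnly attributes can be written from JS
document.cookie = 'theme=dark; path=/; max-age=86400';
console.log(getCookie('theme')); // 'dark'
```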

Precautions when using cookie, session, and Token authentication modes

Considerations when using cookies

  • Because cookies are stored on the client, they can easily be tampered with; validate their legitimacy on the server before use
  • Don't store sensitive data in them, such as user passwords or account balances
  • Using HttpOnly improves security to some extent
  • Minimize cookie size: the amount of data stored cannot exceed 4KB
  • Set domain and path correctly to reduce unnecessary data transfer
  • Cookies cannot cross domains
  • A browser stores at most around 20 cookies per site, and generally allows only about 300 cookies in total
  • Mobile support for cookies is poor, and sessions are typically cookie-based, so tokens are commonly used on mobile

Issues to consider when using sessions

  • Sessions are stored on the server; when a large number of users are online at once, these sessions consume significant server resources
  • Expired sessions therefore need to be cleaned up periodically on the server
  • When a website is deployed as a cluster, sharing sessions among multiple web servers is a problem: a session is created by one server, but the server handling a user request is not necessarily the one that created it, so it cannot retrieve the login credentials and other information previously put into the session
  • When multiple applications want to share a session, cross-domain issues arise on top of the problems above, because different applications may be deployed on different hosts and cross-domain cookie handling is needed in each application
  • The sessionId is stored in a cookie; what if the browser disables or does not support cookies? The sessionId is then usually carried as a URL parameter (URL rewriting), so sessions do not strictly depend on cookies. Mobile support for cookies is poor and sessions usually depend on them, so tokens are common on mobile

Considerations when using tokens

  • If querying a database for tokens is too slow, you can store them in memory; for example, Redis fits token-query needs well
  • The token is managed entirely by the application, so it can bypass the same-origin policy
  • Tokens can avoid CSRF attacks (because cookies are no longer relied on)
  • Mobile support for cookies is poor, and sessions depend on cookies, so tokens are commonly used on mobile

Cookie, LocalStorage and sessionStorage are different

  • The life cycle
    • Cookie: an expiration time can be set; if none is set, the cookie expires by default when the browser is closed
    • LocalStorage: will be permanently saved unless manually cleared.
    • SessionStorage: valid only in the current web session, will be cleared after closing the page or browser.
  • Data storage size
    • Cookie: about 4KB
    • LocalStorage and sessionStorage: can hold up to 5MB of information.
  • The HTTP request
    • Cookies: Are carried in HTTP headers each time. Using cookies to store too much data can cause performance problems
    • LocalStorage and sessionStorage: stored only in the client (browser) and do not communicate with the server
  • Ease of use
    • Cookie: the native cookie interface is unfriendly and needs to be wrapped by the programmer
    • LocalStorage and sessionStorage: the native interfaces are usable and can be re-wrapped to provide better support for objects and arrays (see the sketch below)
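
A small sketch of such a wrapper, adding object/array support to localStorage via JSON (the key name is an illustrative assumption):

```js
const storage = {
  set(key, value) {
    localStorage.setItem(key, JSON.stringify(value));
  },
  get(key) {
    const raw = localStorage.getItem(key);
    return raw === null ? undefined : JSON.parse(raw);
  },
};

storage.set('user', { name: 'amy', tags: ['fe'] });
console.log(storage.get('user').tags); // ['fe']
```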

GET and POST

  • GET is harmless when the browser falls back, while POST resubmits the request.
  • The URL generated by GET can be bookmarked, but not by POST.
  • GET requests are actively cached by browsers, whereas POST requests are not, unless set manually.
  • GET requests can only be url encoded, while POST supports multiple encoding methods.
  • GET request parameters are retained in browser history, while parameters in POST are not.
  • GET requests pass parameters in the URL with length limits, whereas POST does not.
  • GET accepts only ASCII characters for the data type of the argument, while POST has no restrictions.
  • GET is less secure than POST because parameters are exposed directly to the URL and therefore cannot be used to pass sensitive information.
  • The GET argument is passed through the URL, and the POST is placed in the Request body.
  • GET generates one TCP packet; POST generates two. For GET requests, the browser sends the HTTP headers and data together, and the server responds with 200 (returning data). For POST, the browser first sends the headers, the server responds with 100 Continue, the browser then sends the data, and the server responds with 200 OK (returning data).
  • GET requests to obtain specified resources. It is secure, idempotent, and cacheable. The packet body of GET does not have any semantics. POST processes a specified resource based on the packet body. POST is insecure (side effects), non-idempotent, and non-cacheable (in most cases).

Compare TCP and UDP

1. Connection-oriented vs. connectionless: UDP is connectionless, i.e. no connection needs to be established before sending data.
2. Reliability: TCP guarantees data correctness, while UDP may lose packets; TCP guarantees ordering, while UDP does not. Data transmitted over a TCP connection arrives error-free, without loss or duplication, and in order; UDP is best-effort delivery, with no guarantee of reliable delivery. TCP achieves reliability through checksums, retransmission control, sequence numbers, sliding windows, and acknowledgements; for example, lost packets can be retransmitted and out-of-order segments reordered.
3. UDP has better real-time behavior and higher efficiency than TCP, making it suitable for high-speed transmission and real-time or broadcast communication.
4. Each TCP connection can only be point-to-point; UDP supports one-to-one, one-to-many, many-to-one, and many-to-many communication.
5. TCP requires more system resources; UDP requires fewer.

🔔 Same-origin policy and several ways to cross domains

The same-origin policy

Same protocol, same domain name, same port.

JSONP

Loading JS files on a web page is not restricted by the browser's same-origin policy, so script tags can be used to make cross-domain requests.

  • First, the front end passes the callback function's name as a parameter in the URL.

  • When the server receives the request, it reads the callback function's name from this parameter and returns the data wrapped in a call to that function.

  • After receiving the result, the browser will run it as a script because it is a script tag, so as to achieve the purpose of cross-domain data retrieval.

  • Pros: Good compatibility and runs well on older browsers; No XMLHttpRequest or ActiveX support is required; After the request is complete, the result can be returned by calling callback.

  • Disadvantages: it supports GET requests only, not POST or other HTTP methods; and it only solves cross-domain HTTP requests, not data communication between two pages or iframes in different domains. A minimal implementation follows.
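
A minimal JSONP client sketch following the steps above (the endpoint URL and callback-parameter name are illustrative assumptions; the server must respond with cbName({...})):

```js
function jsonp(url, onData) {
  const cbName = 'jsonp_cb_' + Date.now();

  // step 1: expose the callback globally, named via a URL parameter
  window[cbName] = (data) => {
    onData(data); // step 3: the returned script calls back with the data
    delete window[cbName];
    script.remove();
  };

  // step 2: the script tag performs the cross-domain GET
  const script = document.createElement('script');
  script.src = `${url}?callback=${cbName}`;
  document.body.appendChild(script);
}

jsonp('https://api.example.com/data', (data) => console.log(data));
```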

CORS

Requests are classified as simple or non-simple according to the request method and headers; the two are processed differently, and the headers carried and returned also differ.

A simple request

  • methods: HEAD, GET, POST
  • headers: only CORS-safelisted request headers (Accept, Accept-Language, Content-Language, Content-Type), with Content-Type limited to application/x-www-form-urlencoded, multipart/form-data, or text/plain

A request that does not meet both conditions is a non-simple request.

  • For simple requests: the browser just adds an Origin header to the request.

  • For non-simple requests, the browser must first make a precheck request using the OPTIONS method to know whether the server will allow the cross-domain request.

  • CORS request header:

    • Origin: the origin of the request
    • A preflight request must also include Access-Control-Request-Method (the method the actual request will use) and Access-Control-Request-Headers (the extra headers the actual request will carry)
  • CORS response header:

    • Access-Control-Allow-Origin: the origins allowed to make cross-domain requests; if the value is *, cookies cannot be carried
    • Access-Control-Allow-Credentials: allows cookies to be sent (the withCredentials option must also be enabled in the Ajax request)
    • Access-Control-Expose-Headers: additional response header fields exposed to the client
  • Precheck request response header:

    • Access-Control-Allow-Methods
    • Access-Control-Allow-Headers
    • Access-Control-Allow-Credentials
    • Access-Control-Max-Age
  • Advantages: Based on the HTTP standard.

  • Disadvantages: compatibility issues; supported only in IE 10 and above. A server-side sketch follows.
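
A minimal sketch of the server side of CORS using Node's http module (the allowed origin, methods, and port are illustrative assumptions):

```js
const http = require('http');

http.createServer((req, res) => {
  // cannot be '*' when credentials (cookies) are allowed
  res.setHeader('Access-Control-Allow-Origin', 'https://www.domain1.com');
  res.setHeader('Access-Control-Allow-Credentials', 'true');

  if (req.method === 'OPTIONS') {
    // preflight for non-simple requests
    res.setHeader('Access-Control-Allow-Methods', 'GET,POST,PUT');
    res.setHeader('Access-Control-Allow-Headers', 'Content-Type');
    res.setHeader('Access-Control-Max-Age', '600'); // cache the preflight
    res.writeHead(204);
    return res.end();
  }

  res.end('ok');
}).listen(8080);
```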

postMessage

  • Embed an iframe loading the target origin, then use postMessage() and the message event to exchange data.
  • Parent page => child page: the parent calls postMessage, iframe.contentWindow.postMessage(JSON.stringify(data), 'http://www.domain2.com'), and the child page listens for the message event: window.addEventListener('message', function(e) { console.log(e.data) })
  • Child page => parent page: the child calls postMessage, window.parent.postMessage(JSON.stringify(data), 'http://www.domain1.com'), and the parent page listens for the message event

iframe + location.hash

  • Data communication is implemented by modifying location.hash and listening for the hashchange event.
  • Parent page => child page: modify iframe.src directly, appending the data to pass as the hash of the URL; the child page listens for hashchange to get the data
  • Child page => parent page: because the domains differ, the child page cannot directly modify the parent page's hash. A third page is needed: a proxy page on the parent page's domain, embedded inside the child page, which modifies the outermost parent page's hash: parent.parent.location.hash = self.location.hash
  • Disadvantages: the data is exposed directly in the URL, and data type and size are constrained.

iframe + window.name

  • Exploits a property of window.name: the name value persists across page loads (even across different domains) and supports fairly long values (around 2MB)
  • The iframe's src first points at the cross-domain page, which writes its data into window.name; the iframe is then redirected to a same-origin page, after which the parent can read the cross-domain data from the iframe's window.name

iframe + document.domain

  • Elements inside the iframe become accessible by setting document.domain of both parent and child pages to their common parent domain
  • Disadvantage: this approach only works for interaction between frames whose domains share the same parent domain (different subdomains)

WebSocket

  1. What is WebSocket?
  • A persistent network communication protocol over a TCP connection
  • The server can actively push information to the client without the client repeatedly sending requests to the server
  2. What advantages do WebSockets have over traditional HTTP?
  • Client and server need only one TCP connection, cheaper than HTTP long polling
  • The server can push data to the client
  • A lighter protocol header, reducing the data transferred
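
A minimal browser-side sketch (the wss:// URL is an illustrative assumption):

```js
const ws = new WebSocket('wss://echo.example.com'); // one TCP connection

ws.addEventListener('open', () => ws.send('hello'));
ws.addEventListener('message', (e) => console.log(e.data)); // server push
ws.addEventListener('close', () => console.log('closed'));
```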

Server Proxy

Nginx reverse proxy

CSRF and XSS

  • CSRF: Cross-Site Request Forgery
  • CORS: Cross-Origin Resource Sharing
  • XSS: Cross-Site Scripting

CSRF

Content:

  • Exploits the user's cookie to borrow the trust the site places in the user's browser, forging requests on the user's behalf

Means of defense:

  • Verification code
  • Referer Check
  • Token authentication

XSS

Content:

  • UGC information from the user
  • Links from third parties
  • The URL parameter
  • POST parameters
  • Referer (possibly from an untrusted source)
  • Cookies (possibly injected from other subdomains)

Means of defense:

  • HttpOnly prevents Cookie hijacking
  • User input check
  • Check the output of the server

About the Serverless

  • Definition: Serverless is not a specific programming framework, library, or tool. Simply put, Serverless is an architectural idea and methodology for software systems; its core idea is that users need not pay attention to the state, the resources (such as CPU, memory, disk, and network), or the number of the underlying servers that run their application services

About Design Patterns

Design patterns are used to reuse code, make it easier to understand and maintain, and ensure its reliability, making your code truly engineered.

Common design patterns:

  • Creational: singleton, prototype, factory
  • Structural: adapter, decorator, proxy
  • Behavioral: observer, iterator

Specific explanation:

  • Factory [jQuery]: defines an interface for creating objects and lets subclasses decide which class to instantiate, deferring instantiation to subclasses; subclasses can override interface methods to specify their own object types at creation
  • Singleton [Redux's store]: a class has only one instance and provides a global access point to it
  • Adapter [integrating third-party SDKs, wrapping old interfaces, computed]: converts the interface of one class into another, so that interface incompatibilities between classes are resolved through the adapter
  • Decorator [ES decorator proposal]: wraps the original object without changing it, adding extra properties or methods; the decorating and decorated classes each care only about their own core concerns and stay decoupled
  • Proxy [ES6 Proxy]
  • Observer: consider it when a change to one object requires changing other objects, and it does not know how many objects need to change (see the sketch below)
  • Prototype [inheritance]: create a shared prototype and create new objects by copying it
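
A minimal observer-pattern sketch, as referenced above (class and method names are illustrative assumptions):

```js
class Subject {
  constructor() {
    this.observers = [];
  }
  subscribe(fn) {
    this.observers.push(fn);
  }
  notify(data) {
    // the subject does not know or care how many observers there are
    this.observers.forEach((fn) => fn(data));
  }
}

const subject = new Subject();
subject.subscribe((d) => console.log('observer A:', d));
subject.subscribe((d) => console.log('observer B:', d));
subject.notify('state changed'); // both observers react
```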

Advantages and disadvantages GraphQL

Advantages:

  • On-demand fetching. Clients can fetch exactly the resources and data they define from the server, rather than coding against the server's BFF.
  • Code as documentation. A GraphQL query reads more like a document than a list of parameters; in other words, it is suitable for human reading.
  • Easy-to-use API debugging tools. Most GraphQL implementations provide a front-end debugging interface for development, API requests, validation, and so on.
  • Strongly typed API checking. The front-end-facing interface is guaranteed by a strongly typed schema, so problems can be located quickly.
  • Easier API versioning. The API can be extended through the schema, whereas REST has to carry versions through URIs or HTTP headers.

Disadvantages:

  • HTTP requests cannot be cached: caching can only happen at the app level, via the GraphQL client library.
  • Error-code handling is unfriendly: GraphQL uniformly returns 200 with the error message wrapped in the result, so a traditional HTTP client needs extra processing to take the exception branch.

Should I use GraphQL or BFF? A: if the business changes constantly or you need to provide an API externally, GraphQL is the better choice; if the business rarely changes, or the client data volume is small (for example, Web only), then BFF works.

How is the micro front end implemented?

What?

  • Splits a single-page front-end application from one monolithic application into several small front-end applications combined together
  • Each front-end application can be developed and deployed independently
  • They can also be developed in parallel, via NPM packages or Git submodules

Why?

  • Application autonomy, single responsibility, independent development and deployment, improve development efficiency
  • Stack independent
  • Legacy System Migration

How?

  1. Route distribution: use the reverse-proxy capability of an HTTP server to route requests to the right application.
  2. Front-end microservices: design communication and loading mechanisms on top of different frameworks so that the corresponding application can be loaded within one page.
  3. Micro-apps: use software-engineering means to combine multiple independent applications into a single application in the build and deployment environment.
  4. Micro-widgets: develop a new build system that builds part of the business functionality into an independent chunk of code that only needs to be loaded remotely.
  5. Front-end containerization: use iframes as containers to host other front-end applications.
  6. Application componentization: build cross-framework front-end applications with the help of Web Components.

There are many ways to implement a micro front end, all chosen by scenario: in some cases no approach fits, and in some scenarios several solutions can be used at once.

reference

  1. HTTP and HTTPS details
  2. Browser rendering process and page optimization
  3. Hyperdetailed HTTPS handshake and digital signature explanation
  4. Cookie, Session, Token, JWT
  5. The differences between HTTP1.0, HTTP1.1, and HTTP2.0