What’s the difference between HTTP and HTTPS?
Both are hypertext transfer protocols, but HTTPS adds an SSL/TLS encryption layer on top of HTTP. The default port for HTTP is 80, and for HTTPS it is 443.
What are the SSL encryption methods?
Symmetric encryption: both encryption and decryption use the same key. The browser and the server each generate a random number, and the two are mixed into a single key, which is then used to encrypt the transmitted data.
Asymmetric encryption: Use both public and private keys. The browser sends the list of encryption suites to the server. The server selects an encryption suite and sends the public key to the browser. The browser uses the public key to encrypt data, and the server uses the private key to decrypt data.
Symmetric plus asymmetric encryption combines random numbers with a public/private key pair. The browser sends a client-random to the server. The server returns a server-random and its public key. The browser generates a pre-master random number and encrypts it with the public key. Both the browser and the server now hold the same client-random, server-random, and pre-master, so each can derive the same key for symmetric encrypted transmission.
What are the disadvantages of the above encryption methods?
Symmetric encryption: the random numbers and the key-synthesis algorithm are transmitted in plaintext, so an attacker can synthesize the same key and decrypt the traffic.
Asymmetric encryption: the server encrypts with its private key and the browser decrypts with the public key, but the public key is transmitted in plaintext, so anyone can decrypt the data sent to the browser; that direction cannot be secure.
Symmetric plus asymmetric encryption: under DNS hijacking, the browser does not know it is actually visiting the attacker's site, so the encryption alone does not help.
How to solve DNS hijacking problem?
Use CA authentication and let the site prove its own identity.
CA certification process?
First, the site applies for its own CA certificate. The browser then checks who issued that certificate, i.e. the intermediate CA organization, and walks up the certificate chain until it reaches a trusted root CA.
How do I maintain a long connection to the server?
Ajax polling: Ajax sends requests to the server at regular intervals to keep data synchronized. The disadvantages are low efficiency and wasted resources.
Long polling: the request header needs Connection: keep-alive. After the client sends a request, if there is no new data, the server holds the request in a queue until data is available. The advantage is less invalid network traffic; the disadvantage is that it handles high-concurrency scenarios poorly.
iframe long connection: embed an iframe tag in the page with its src pointing to a long-lived request. The advantage is timely message delivery; the disadvantage is that it consumes server resources.
WebSocket: two-way communication that only needs to connect once, after which both sides can send data to each other, suitable for instant messaging, real-time data updates and similar scenarios. The WebSocket protocol is independent of HTTP; it is a new protocol built on TCP, but to stay compatible with existing handshake conventions it still uses HTTP during the handshake phase. Advantages: two-way communication, no cross-origin restriction, only one connection to establish. Disadvantages: long connections are sensitive to network conditions and reconnection must be handled properly; only Internet Explorer 10 or later supports it.
WebSocket differs from HTTP:
- The URL starts with ws:// or wss://.
- The handshake response status code is 101.
- The Connection value of the request and response headers is Upgrade, indicating a protocol upgrade.
- The request and response headers carry Sec-WebSocket-* fields (e.g. Sec-WebSocket-Key and Sec-WebSocket-Accept).
How is the webpack build optimized?
Build speed optimization: first use speed-measure-webpack-plugin to analyze which parts of the packaging are slow, then optimize them accordingly.
Enable multi-threaded packaging:
- Use HappyPack or thread-loader to enable multithreaded packaging with worker pools; thread-loader is recommended over HappyPack, which is no longer maintained.
Use caching to speed up the second build:
- Enable the babel-loader cache by adding cacheDirectory=true to the loader options.
- Enable the compression plugin's cache; TerserPlugin is recommended.
- Enable disk caching with hard-source-webpack-plugin; this gives the most obvious speedup on second builds.
Reduce file search scope:
- Set exclude: /node_modules/ so loaders skip npm packages.
- Configure resolve.alias so module paths resolve faster.
Bundle size optimization: start with analysis. The webpack-bundle-analyzer plugin opens a visual display of bundle volume to show which parts are large; use a dynamic polyfill service if the polyfill is too large, extract common packages to a CDN, and so on.
Using compression plugins:
- terser-webpack-plugin
Pre-compiled DLL bundles:
- Use the built-in webpack.DllPlugin with a separate configuration file such as webpack.dll.config.js; packing generates a manifest.json file mapping the packaged names to package information. To consume it, use webpack.DllReferencePlugin in the production configuration to reference the manifest, and load the DLL bundle before the entry bundle.
- To enable image compression, use image-webpack-loader to compress images after file-loader parses the image files.
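The build-speed and bundle-size options above can be combined into one configuration. This is only an outline; plugin availability and option names vary across webpack major versions, so treat it as a sketch rather than a drop-in config.

```javascript
// Sketch of a webpack config combining the optimizations discussed above.
const path = require('path');
const TerserPlugin = require('terser-webpack-plugin');
const { BundleAnalyzerPlugin } = require('webpack-bundle-analyzer');

module.exports = {
  resolve: {
    // Aliases shorten module lookup paths.
    alias: { '@': path.resolve(__dirname, 'src') },
  },
  module: {
    rules: [
      {
        test: /\.js$/,
        exclude: /node_modules/,            // shrink the loader search scope
        use: [
          'thread-loader',                  // worker-pool packaging
          { loader: 'babel-loader', options: { cacheDirectory: true } },
        ],
      },
    ],
  },
  optimization: {
    minimizer: [new TerserPlugin({ parallel: true })], // parallel compression
    splitChunks: { chunks: 'all' },         // extract shared packages
  },
  plugins: [new BundleAnalyzerPlugin()],    // visualize bundle volume
};
```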
What happens from entering a URL in the browser to the page being rendered?
First the browser determines whether the input is a URL or a search keyword; a keyword is handed to the default search engine. For a URL, the current page first fires its beforeunload event, then the browser process sends the URL to the network process to initiate the request.
Before the request is initiated, the browser tries to hit the strong cache, reading in order from the service worker, memory cache, disk cache, and push cache. If nothing matches, the request proceeds normally: DNS resolves the domain name to an IP address to locate the host.
The browser then initiates the resource request. A TCP connection is established first, with the three-way handshake ensuring reliable communication between both sides; if the protocol is HTTPS, a TLS connection is established on top of it. The negotiated cache is consulted next; on a miss, the HTTP request is sent with its headers and body, and the browser waits for the server to process and respond. If the response status code is 301/302, the browser redirects and re-initiates the navigation process; otherwise it handles the body according to the Content-Type. After the resource is received, the TCP connection is closed with the four-way handshake, unless Connection: keep-alive is set, in which case the TCP connection is kept open. The browser then enters the page rendering phase.
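The strong-cache check mentioned above can be sketched as a small freshness test: a cached response is reused without any request while its age is within Cache-Control max-age, after which the browser falls back to a negotiated (conditional) request. The helper name isFresh is illustrative:

```javascript
// Sketch of the strong-cache decision: serve from cache while the entry's
// age is inside Cache-Control max-age; otherwise revalidate with the server
// (ETag / Last-Modified), i.e. the negotiated cache.
function isFresh(cachedAtMs, nowMs, cacheControl) {
  const match = /max-age=(\d+)/.exec(cacheControl || '');
  if (!match) return false;               // no max-age: cannot use the strong cache
  const maxAgeMs = Number(match[1]) * 1000;
  return nowMs - cachedAtMs < maxAgeMs;
}

console.log(isFresh(0, 5000, 'max-age=10'));  // true: still fresh, no request sent
console.log(isFresh(0, 15000, 'max-age=10')); // false: fall back to a conditional request
```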
Briefly describe how the browser renders the response data?
The process can also be called the rendering pipeline:
- First, build the DOM tree: parse tags into tokens one by one and push them through a token stack to determine the parent-child relationships between tags, building up a tree structure.
- Next, build the CSSOM: parse link tags, style tags and inline CSS into styleSheets; normalize property values, such as converting em to px, color keywords to RGB values, and property keywords to concrete values; then traverse the DOM tree and match the corresponding CSS rules through inheritance and the cascade, so that each DOM node gets its own style (visible under Elements > Computed in the browser devtools).
- The CSSOM and DOM are then combined and computed into a layout tree containing only visible elements; hidden elements do not participate in the layout calculation.
- Next comes layering. For example, absolutely positioned elements can cover other elements because they are not on the same layer; the layout tree is computed again into a layer tree that captures the hierarchy.
- Then comes painting: layers are drawn according to their relationships, the bottom layer first and the upper layers after, forming a list of paint instructions.
- Only the part of a layer inside the browser viewport needs to be drawn first, so there is a tiling step: each layer is divided into smaller tiles, and the tiles inside the viewport are drawn first.
- Next comes rasterization, which uses the GPU to render the current viewport's tiles into bitmaps saved in GPU memory.
- When rasterization is complete, the browser process is notified to display the bitmaps in GPU memory on the screen.
Reflow: changing an element's geometry via JS or CSS triggers reflow, re-running the pipeline from the layout stage onward.
Repaint: changing an element's appearance via JS triggers repaint, starting the pipeline from the paint stage and skipping the layout and layering stages.
Compositing: changes that affect neither layout nor appearance, such as transform animations, are completed directly on the compositor thread; the CSS will-change property tells the browser in advance that an element will animate, so it prepares a separate layer for it and the animation then runs entirely on the compositor thread.
How to reduce reflow and repaint?
Use transform instead of changing position, and enable GPU acceleration with translate3d
Use visibility instead of display: none whenever possible
Do not read layout-related node property values inside a loop
The faster the animation, the fewer the reflows
What are the differences between opacity: 0, visibility: hidden, and display: none?
opacity: 0. The element is hidden but still appears in the layout tree, can still be clicked, takes up its original space, and the transparency also applies to its child elements.
visibility: hidden. The element is hidden, appears in the layout tree, cannot be clicked, takes up its original space; child elements inherit the value but can override it to become visible again.
display: none. The element is hidden, does not appear in the layout tree, cannot be clicked, does not occupy its original space, and its child elements are removed along with it.
What are the differences between different versions of HTTP?
Each HTTP version update addresses issues left over from previous versions.
The first was HTTP/0.9, which mainly solved the problem of transmitting text over the Internet. Its main limitations: only HTML text could be transferred, only GET requests were supported, there were no request headers or body, and the server sent no response headers.
HTTP/1.0 added request and response headers and status codes; response resources were no longer limited to HTML; GET, POST, and HEAD requests were supported. The main remaining problem was that the protocol was still stateless.
Then came HTTP/1.1, which added Connection: keep-alive so a TCP connection could be reused, and Content-Range for range requests. Problems remained: TCP slow start, multiple TCP connections competing for bandwidth, and HTTP head-of-line blocking, where an unfinished request blocks all the requests queued behind it.
Then HTTP/2 added a binary framing layer to support multiplexing: a single long-lived TCP connection carries all the data, so there is only one TCP slow start and no bandwidth competition between connections, and concurrent requests solve HTTP head-of-line blocking. It also added request and response header compression, and server push. A new problem appeared: TCP-level head-of-line blocking, because only one TCP connection is used; once the packet loss rate reaches about 2%, HTTP/2 performs worse than HTTP/1.1.
Finally, the latest HTTP/3 implements the QUIC protocol on top of UDP, combining TCP-like transmission reliability with integrated TLS encryption, solving TCP head-of-line blocking and adding a faster handshake.
What is webpack's build process?
When the packaging command is executed, the entry file node_modules/webpack/bin/webpack.js runs. It first checks whether an official CLI is installed, i.e. webpack-cli or webpack-command; if neither is present it prompts you to install one, and if both are installed it asks you to remove one. Once the CLI is in place, it merges the options in the configuration file with those from the command line (the command line takes precedence) into a configuration webpack understands. This configuration is passed to the webpack function, which returns a Compiler object, and its run method is executed to start compiling.
The Compiler class inherits from Tapable, an EventEmitter-like event-driven library. When you need to listen for certain points in the webpack lifecycle, you register the corresponding hook callbacks in a plugin.
When the Compiler object is instantiated, webpack's built-in plugins are initialized first: their instances are attached to the Compiler object, and the Compiler object is passed into each plugin's apply method. Then the run method starts the compilation, executing the internal compile method, which instantiates a Compilation object responsible for the build process.
The Compilation's addEntry method finds the entry file, then buildModule executes the module factory: loader-runner executes the loaders configured for each file type, acorn generates an AST from the loader output, and the AST is traversed; whenever a require dependency is encountered it is added to the dependency array and the same procedure recurses into that dependency. When finished, the seal method is executed.
Optimization passes are then performed on the code. Finally, the generated code is saved to compilation.assets, and emitAssets writes the final files to the output path.
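The dependency walk described above can be sketched as a toy version: start from the entry, collect require() dependencies, and recurse. Real webpack parses an AST with acorn; this sketch uses a regex over an in-memory module map purely for illustration.

```javascript
// Toy dependency graph builder: the module map stands in for files on disk.
const moduleMap = {
  'entry.js': "require('./a.js'); require('./b.js');",
  './a.js': "require('./b.js');",
  './b.js': '',
};

function buildDependencyGraph(id, graph = {}) {
  if (graph[id]) return graph;            // already visited this module
  const source = moduleMap[id];
  // Real webpack walks an acorn AST; a regex is enough for the sketch.
  const deps = [...source.matchAll(/require\('(.+?)'\)/g)].map(m => m[1]);
  graph[id] = deps;
  deps.forEach(dep => buildDependencyGraph(dep, graph)); // recurse into deps
  return graph;
}

console.log(buildDependencyGraph('entry.js'));
// { 'entry.js': ['./a.js', './b.js'], './a.js': ['./b.js'], './b.js': [] }
```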
What about the Vuex principle?
Vuex's install method is executed when Vue.use runs; it applies a global mixin containing only a beforeCreate hook, which gives every component access to the this.$store property.
new Vuex.Store formats the incoming configuration and recursively registers the state, getters, mutations, and actions of each module. The getters, actions, and mutations of all modules are collected into flat objects, with the module name prefixed to each key, while state is placed into a nested object that preserves the module hierarchy.
commit and dispatch are overridden internally so that when the current module triggers a state change, the module name is automatically prefixed to the commit or dispatch type. Finally, the map-prefixed syntactic sugar helpers (mapState, mapGetters, mapMutations, mapActions) are provided.
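The module registration and commit prefixing can be sketched with a minimal store. This is an illustrative toy, not the real Vuex: it flattens mutations into one object keyed by "moduleName/type" and nests module state, but skips getters, actions, and reactivity.

```javascript
// Minimal sketch of Vuex-style module registration and namespaced commits.
class MiniStore {
  constructor({ state = {}, mutations = {}, modules = {} }) {
    this.state = state;
    this._mutations = {};
    this._register('', { mutations });
    Object.entries(modules).forEach(([name, mod]) => {
      this.state[name] = mod.state;         // nested state keeps its hierarchy
      this._register(name + '/', mod, mod.state);
    });
  }
  _register(prefix, mod, localState = this.state) {
    Object.entries(mod.mutations || {}).forEach(([type, fn]) => {
      // Flatten mutations under a "moduleName/type" key.
      this._mutations[prefix + type] = payload => fn(localState, payload);
    });
  }
  commit(type, payload) {
    this._mutations[type](payload);
  }
}

const store = new MiniStore({
  state: { count: 0 },
  mutations: { add(state, n) { state.count += n; } },
  modules: {
    cart: {
      state: { items: 0 },
      mutations: { add(state, n) { state.items += n; } },
    },
  },
});

store.commit('add', 1);       // root mutation
store.commit('cart/add', 2);  // namespaced module mutation
console.log(store.state.count, store.state.cart.items); // 1 2
```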
What is the vue-router principle?
vue-router's install method also applies a global mixin with only a beforeCreate hook, which gives each component the properties this.$router and this.$route. When the beforeCreate hook executes, the init method of the router instance runs, and the global components router-view and router-link are registered.
When new VueRouter is executed, the router configuration is built into a mapping table from path and name to the corresponding route components. A history property is then initialized internally: there is a base History class with three subclasses derived from it, and different modes instantiate different subclasses, which provide methods such as transitionTo, push, and replace.
After registration, the globally injected beforeCreate hook executes; the router instance's init method runs internally and calls transitionTo. This method handles navigation, URL changes, and router-view rendering. Navigation guards are processed first: the new route is resolved, and all guards are placed into a queue in execution order, such as component deactivation hooks, global before hooks, and route update hooks; the internal runQueue method executes them, and inside each hook next must be called to advance to the next navigation hook.
The next step updates the URL: the content after # in the existing path is replaced with the target fullPath to generate a new URL, and the browser's pushState pushes the record onto the history stack.
Since router-view can be nested, the nesting depth is calculated first: if a component's parent was rendered by a router-view, the depth value is incremented, and the matched array stores the route records from parent to child, so the corresponding component can be rendered accurately according to the depth value. Finally, the render function's h method renders the component.
Why does webpack’s loader execute from right to left?
Loader chaining is function composition: the rightmost function executes first and its result is passed to the function on its left. Each loader is a function whose input is the source of the file matched by the loader's test rule and whose output is a webpack module, which is passed to the next loader for further processing.
compose = (f, g) => (...args) => f(g(...args))
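The same right-to-left order generalizes to any number of loaders with reduceRight; the two toy "loaders" below are invented for illustration, each receiving the previous one's output:

```javascript
// Right-to-left execution as a reduce: each loader receives the previous
// loader's output, exactly like the compose function above.
const compose = (...fns) => input => fns.reduceRight((acc, fn) => fn(acc), input);

// Two toy "loaders": webpack would run cssLoader first, then styleLoader.
const cssLoader = source => `/* parsed css */ ${source}`;
const styleLoader = source => `module.exports = inject(\`${source}\`)`;

// Mirrors the config order use: ['style-loader', 'css-loader'].
const runLoaders = compose(styleLoader, cssLoader);
console.log(runLoaders('.a { color: red }'));
// module.exports = inject(`/* parsed css */ .a { color: red }`)
```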
How to write a webpack loader?
A loader is a function that receives the source code of the file matched by test, along with the options object passed to the loader. The function processes the source and exports a JS module for subsequent loaders to use. The result can be returned directly as code (for example a string beginning with export default), or passed asynchronously via this.callback(null, code).
How to write a webpack plugin?
A plugin can hook into the whole build lifecycle. To write one, export a class with an apply method whose first parameter is the compiler object, which inherits from Tapable. Then listen to the various hooks during the build process to choose when to execute, for example the emit hook for file writes, setting files on compilation.assets, or other hooks such as make and run.
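A minimal plugin sketch in that shape. The stub compiler below stands in for webpack's real Tapable hooks so the mechanics can be shown without running a build; only the plugin class itself reflects the real pattern:

```javascript
// A toy plugin: on emit, write a file listing all assets into compilation.assets.
class FileListPlugin {
  apply(compiler) {
    compiler.hooks.emit.tap('FileListPlugin', compilation => {
      const list = Object.keys(compilation.assets).join('\n');
      compilation.assets['filelist.txt'] = {
        source: () => list,      // webpack asset interface: content
        size: () => list.length, // webpack asset interface: byte size
      };
    });
  }
}

// Stub of the parts of the compiler/compilation this plugin touches:
const taps = [];
const compiler = { hooks: { emit: { tap: (_name, fn) => taps.push(fn) } } };
const compilation = { assets: { 'main.js': {}, 'style.css': {} } };

new FileListPlugin().apply(compiler); // registration, as webpack would do
taps.forEach(fn => fn(compilation));  // simulate the emit hook firing
console.log(compilation.assets['filelist.txt'].source());
```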
How does JS garbage collection work?
Start from the variable's type. Value types are stored directly on the call stack, and the memory they occupy is reclaimed when the stack frame pops. For reference types, the call stack stores pointers into the heap; when the stack frame pops, the stack space is reclaimed immediately, but the space occupied in the heap must be garbage collected.
The memory in the heap is divided into a new generation and an old generation. The new generation stores newly created objects and uses the sub garbage collector; the old generation stores long-lived objects and uses the main garbage collector. Both collectors mark active and inactive objects, and the memory of inactive objects is reclaimed after marking completes.
Sub garbage collector: uses the Scavenge algorithm. The new-generation space is split into an object area and a free area. New objects are allocated in the object area; when it fills up, live objects are marked and copied compactly into the free area. After the copy completes, the roles of the two areas are swapped: the object area becomes the free area and the free area becomes the object area. An object that survives two such swaps is promoted to the old generation.
Main garbage collector: uses the mark-sweep algorithm (reference counting is not the mainstream collection mechanism). The marking phase traverses from the call stack: objects that can be reached are active, and the unreachable garbage data is swept away.
How is XSS protected?
The main hazards of cross-site scripting are as follows:
- The main harm is cookie theft.
- Listen for user keyboard events.
- Modify the DOM forgery login window to obtain the entered user name and password.
- Generate page floating window ads.
Attack modes include storage, reflection, and DOM-based malicious script injection:
Storage type: malicious scripts are submitted to the database through input fields (such as names and comments). When users load the page containing the malicious script, it runs, requests a remote script, and sends user information to the attacker's server.
Reflection type: simulates user input by concatenating the malicious script into the URL's query parameters, which the server reflects back into the page. The attack works by inducing users to click on crafted malicious links.
DOM type: web page data is modified in transit or locally, for example through Wi-Fi hijacking or malicious local software.
Prevention methods:
- Filter or escape keywords, such as converting the left angle bracket < to &lt; and the right angle bracket > to &gt;.
- Sensitive cookie information is set to httpOnly.
- Use a CSP (Content Security Policy) to restrict loading resource files from other domains, prohibit submitting data to third-party domains, prohibit the execution of inline scripts and unauthorized scripts, and provide a reporting mechanism to discover XSS vulnerabilities as early as possible.
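The keyword filtering in the first point can be sketched as a small HTML-escaping helper, so injected markup is rendered as inert text (a minimal sketch; production code would typically use a vetted library):

```javascript
// Escape the characters that let injected input become markup. The ampersand
// must be replaced first so already-escaped entities are not double-broken.
function escapeHtml(input) {
  return String(input)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

console.log(escapeHtml('<script>alert(1)</script>'));
// &lt;script&gt;alert(1)&lt;/script&gt;
```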
How to prevent CSRF?
Cross-site request forgery, which exploits a user’s login status to initiate a request.
Attack mode:
- Automatically initiated GET request: for example, using the src attribute of an img tag, which looks like an image resource request but actually initiates a GET request to the target site.
- Automatically submitted POST request: build a hidden form that submits itself when the user visits the page, perhaps performing a transfer operation.
Prevention methods:
- Set the cookie's SameSite attribute to control when cookies are carried: Strict allows only same-site requests to carry them; Lax also carries them on top-level GET navigations; None carries them on any request.
- Verify the request's source site by checking the Referer and Origin headers.
- CSRF Token: A request must carry a CSRF Token. If a request from a third party does not carry a CSRF Token, the request is rejected.
How to optimize performance?
We mainly start from two aspects:
First, load faster:
- Reduce the number of requests: lazy-load routes, lazy-load images, use sprite images, inline small images as base64.
- Reduce the number of RTTs (the TCP round-trip delay; one round trip carries about 14 KB): inline the first screen's CSS or JS, and serve static resources from a CDN to shorten each RTT.
- Reduce single-request time: compress code; use import/export syntax so tree-shaking can take effect; compress images with image-webpack-loader.
- Use HtmlWebpackTagsPlugin to extract common resource packages and introduce them via CDN.
- Use the browser’s strong cache and negotiated cache wisely.
Second, render faster:
- Put CSS files at the top and JS at the bottom; load JS asynchronously where appropriate to prevent render blocking.
- Batch DOM operations to reduce reflow and repaint triggers; use translate3d for displacement to enable hardware acceleration.
- Throttle or debounce frequently triggered events.
- Hand the script's heavy computing tasks to Web Workers.
- Avoid frequently creating temporary reference-type variables, reducing the number of garbage collections.
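The throttling mentioned above can be sketched in a few lines: within one wait window, a burst of calls runs the wrapped function only once (debouncing would instead wait until the calls stop):

```javascript
// Minimal throttle: run fn at most once per waitMs window.
function throttle(fn, waitMs) {
  let last = 0;
  return function (...args) {
    const now = Date.now();
    if (now - last >= waitMs) {
      last = now;
      fn.apply(this, args);
    }
  };
}

let runs = 0;
const onScroll = throttle(() => { runs += 1; }, 1000);
for (let i = 0; i < 5; i++) onScroll(); // burst of events within one window
console.log(runs); // 1
```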
What are the front-end performance indicators?
- First Paint (FP) : the First frame of non-background content drawn by the browser.
- FCP(First Contentful Paint): The time when the browser draws the First frame of text, image, etc.
- LCP (Largest Contentful Paint): the time when the largest content element finishes rendering, roughly the first-screen time.
- First CPU Idle (FI): the first time the CPU goes idle, the minimum time at which the page can respond to interaction.
- TTI(Time to Interactive) : Indicates the Time of complete interaction.
- Max Potential First Input Delay (MPFID) : indicates the time required to respond to user Input when the page is busiest.
How to optimize the above indicators?
FP: The renderer generates the blank page before parsing the HTML, and the point where that blank page is created is FP. If this indicator takes too long, the HTML file is probably taking too long to load due to network problems.
FCP: After the critical resources are loaded, the rendering pipeline runs and the first pixels are drawn on the page. If this indicator does not meet the standard, the cause may be slow resource loading or poor script execution efficiency.
LCP: The point when the main page content is fully drawn. Optimization is the same as for FCP: the usual cause is resources loading too slowly.
FI: Optimization is the same as LCP.
TTI: Postponing script work that is not related to generating the page.
MPFID: The calculation task is thrown to the WebWorker to reduce the main thread pressure; Reconstructing the CSS to reduce layers.
DOMContentLoaded: This event fires once the DOM has been built.
onload: This event fires after all resources have finished loading.
Many of the optimizations above concern loading. What are the load-time performance indicators?
You can view the specific information of each request in the Network panel, where Timing is the request time:
- Queueing: the time a request spends queued before dispatch. Before HTTP/2 a browser could only keep six TCP connections per domain at once, so requests queue when all are busy; upgrading to HTTP/2 solves this.
- Stalled: the time the request is stalled before it can actually be sent.
- Initial Connection /SSL: Establishes a connection with the server, including the TCP and SSL handshake time.
- Request Send: Indicates the time for sending data after establishing a connection.
- TTFB: the time until the server's first response byte arrives. Performance causes may be slow data generation on the server, network latency, or oversized request headers such as large cookies.
- Content Download: from the first byte until all response data is received. Slowness here usually means the response data is too large; compressing the code and stripping comments are typical fixes.
How does the front-end perform exception/performance monitoring?
Performance monitoring can be done by viewing the Performance or Audits/Lighthouse panels, or by listening to performance metrics in real time while users are actually using the page.
Exception catching:
- try-catch: catches errors in locally suspicious code
- window.onerror: catches synchronous and asynchronous JS runtime errors, but cannot catch static resource loading exceptions
- Resource error capture: attach an onerror event to the resource element itself
- window.addEventListener('error', error => {…}, true): catches resource errors during the event capture phase
- unhandledrejection event: catches Promise errors
- Vue.config.errorHandler = (err, vm, info) => {…}: catches Vue exceptions