This is the ninth day of my participation in the August More Text Challenge.

I. Browser security

1. XSS attacks

Cross-site scripting (XSS) attacks are carried out through script injection. By injecting a malicious script into a site so that it runs on users' pages, an attacker can modify the user's DOM or steal the user's cookies.

The essence of an XSS attack is that the web page fails to filter its input, so malicious input runs as if it were normal code, allowing malicious scripts to execute.

An attacker can perform the following operations on a user’s page through an XSS attack:

  1. Obtain page data: cookies, localStorage, DOM content, etc.
  2. Break the page structure
  3. Hijack traffic (redirect users to another website)

Attack type:

  1. Stored: the malicious script is stored on the server and executes when the server returns it to the client
  2. Reflected: the user is induced to visit a link carrying malicious code; the server reflects the code back in its response, where it executes
  3. DOM-based: XSS produced by modifying the page's DOM nodes

Therefore, we can conclude that the first two types must be handled on the server side, while DOM-based XSS is a vulnerability of the front end itself and must be handled on the front end.

Defensive measures:

  • Input filtering: escape characters that can form markup, such as the angle brackets of tags
  • Output control: set a CSP whitelist to tell the browser which sources it may load code from, so the browser can intercept malicious code
  • Protect sensitive information: for example, mark cookies as HttpOnly so scripts cannot read them
  • Use captchas to prevent scripts from impersonating the user for sensitive operations
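As a sketch of the input-filtering idea above, here is a minimal HTML-escaping helper. The function name and the character set are illustrative; a real project should rely on a vetted sanitization library.

```javascript
// Minimal HTML-escaping helper (illustrative sketch, not a complete XSS defense).
// It replaces the characters most often used to inject markup.
function escapeHTML(input) {
  const map = {
    '&': '&amp;',
    '<': '&lt;',
    '>': '&gt;',
    '"': '&quot;',
    "'": '&#39;'
  };
  return String(input).replace(/[&<>"']/g, (ch) => map[ch]);
}

// A stored "<script>" payload becomes inert text instead of executable markup.
console.log(escapeHTML('<script>alert(1)</script>'));
// &lt;script&gt;alert(1)&lt;/script&gt;
```

With this applied on output, an injected script tag is rendered as visible text rather than executed.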

2. CSRF attacks

A cross-site request forgery attack induces the user to visit a third-party site, which then sends cross-site requests to the attacked site. If the user is logged in there, the attacker can ride on that login state and operate against the server as if it were the user.

Prevention methods

  • Same-origin check: the server inspects the Referer header of the request to decide whether it comes from an allowed site. The problem is that the Referer can be forged, so this alone is not secure
  • Token: the server issues a random token to the user; when the site makes another request, the token is added to the request parameters, and the server validates it

3. What front-end security problems may arise

  • XSS script injection attack
  • The abuse of the iframe
  • CSRF cross-site request forgery
  • Malicious third-party libraries

II. Browser caching mechanism

1. Understanding the cache

The browser cache mainly targets static front-end resources. After a request, resources pulled from the server are stored in local memory or on disk; the next time the same request is sent, the resource can be fetched directly from the local copy. If the resource on the server has been updated, we request it again and save the new copy locally. This greatly reduces the number of requests and improves the performance of the site.

Using the browser’s cache has the following advantages:

  1. Faster loading of resources on the client
  2. Reduces server pressure
  3. Reduces network traffic

2. Classification of the browser cache

Strong cache

If the cache resource is valid, the data is fetched directly from the local cache without making a request.

A strong caching policy can be set in two ways: the Expires and Cache-Control fields in the HTTP response header.

Note that Expires is an HTTP/1.0 field specifying an absolute expiry date, which can conflict with the client's clock. HTTP/1.1 therefore introduced Cache-Control, whose max-age is a relative validity period, like a shelf life, allowing more precise control over the client cache.

In general, only one of them should be set. If both are present, Cache-Control has higher priority than Expires.
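A minimal sketch of producing both headers from Node. The helper name and the one-hour lifetime are arbitrary examples, not a prescription.

```javascript
// Sketch of a strong-cache header pair. Cache-Control's max-age is relative,
// so it avoids the clock-skew problem an absolute Expires date can have.
function strongCacheHeaders(maxAgeSeconds) {
  return {
    'Cache-Control': `max-age=${maxAgeSeconds}`,
    // Kept mainly for HTTP/1.0 clients; ignored when Cache-Control is present.
    'Expires': new Date(Date.now() + maxAgeSeconds * 1000).toUTCString()
  };
}

console.log(strongCacheHeaders(3600)['Cache-Control']); // max-age=3600
```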

Negotiated cache

The request is sent to the server, and the server decides whether the negotiated cache is hit based on the validators in the request header. If it is, the server returns status code 304 to tell the browser to read the resource from its cache.

There are two common header pairs used for the negotiated cache.

Last-Modified, returned by the server, and If-Modified-Since, sent by the client

The server indicates when the resource was last modified by adding a Last-Modified header to the response. On its next request, the browser adds an If-Modified-Since header whose value is the Last-Modified value returned with the resource last time. The server compares this value with the resource's latest modification time to decide whether it has changed. If the resource is unmodified, 304 is returned and the client uses its local cache; if it has been modified, the modified resource is returned.

Disadvantage: the values of these two headers are timestamps accurate only to the second, so a resource modified more than once within the same second can go undetected.

This leads to the ETag and If-None-Match headers.

This pair works like the one above, except that the value is a unique identifier of the resource's content, so there is no time ambiguity.

Note: when Last-Modified and ETag are both present, ETag takes precedence.

Conclusion:

Both the strong cache and the negotiated cache end up taking the resource from the local cache; the difference is that the negotiated cache sends one request to the server. When neither is hit, a normal request is made. In the actual caching mechanism, the two policies are used together: the browser first checks, from the request information, whether the strong cache is hit, and if so uses the resource directly. If not, the browser sends a request to the server with the validator headers and uses the negotiated cache. If the negotiated cache is hit, the server returns no resource body and the browser uses its local copy; otherwise the server returns the latest resource to the browser.

3. The whole process of the browser’s caching mechanism

  1. When the browser loads a resource for the first time, the server returns 200; the browser downloads the resource from the server and caches the resource file and its response headers for comparison on the next load.
  2. On the second visit, the strong cache is checked first: the Expires value (or Cache-Control, which wins if both are set) is compared with the current time, and on a hit the resource is read directly from the cache.
  3. If the resource has expired, the negotiated cache is used: a request is sent to the server with If-Modified-Since or If-None-Match to ask whether the server's resource has changed.
  4. On receiving the request, the server prefers the ETag, comparing it against If-None-Match to decide whether the resource was modified (falling back to Last-Modified if no ETag is available). If unmodified, the negotiated cache is hit and status code 304 is returned; otherwise the requested resource is returned with status code 200.
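The validation logic in steps 3-4 can be sketched as a pure decision function. The helper name and the resource shape are assumptions for illustration, not a real server API.

```javascript
// Illustrative negotiated-cache decision: prefer ETag/If-None-Match and fall
// back to Last-Modified/If-Modified-Since, returning the status the server
// would answer with (304 = use local copy, 200 = full resource).
function negotiate(reqHeaders, resource) {
  if (resource.etag && reqHeaders['if-none-match']) {
    return reqHeaders['if-none-match'] === resource.etag ? 304 : 200;
  }
  if (resource.lastModified && reqHeaders['if-modified-since']) {
    return new Date(reqHeaders['if-modified-since']) >= new Date(resource.lastModified)
      ? 304
      : 200;
  }
  return 200; // no validators: send the full resource
}
```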

III. Principles of browser rendering

1. Main modules of the browser rendering engine and the rendering process

A rendering engine mainly includes an HTML parser, CSS parser, JS engine, layout module, drawing module, and so on.

HTML parser: parses HTML documents and organizes the elements into a DOM tree

CSS parser: Calculates style information for each element object in the DOM

JavaScript engine: interprets and executes JavaScript code, which can modify the content and styles of the page

Layout module: After the DOM is created, it is necessary to combine the element objects in the DOM with the style information, calculate their size and position and other layout information, and layout the elements

Drawing module: draw each page node after layout calculation into image results

Browser rendering process:

1. First parse the HTML tags, call the HTML parser to parse the corresponding tokens (a token is a serialization of the tag text), and build the DOM tree (a piece of memory to hold the parsed tokens and establish relationships).

2. When the link tag is encountered, the corresponding parser is called to process the CSS tag, and the CSS style tree is built

3. When encountering a script, call the JavaScript engine to process script tags, bind events, modify the DOM tree /CSS tree, etc

4. Merge the DOM tree and CSS tree into one render tree

5. Render against the render tree to calculate the geometry of each node (this process is GPU-dependent)

6. Finally draw each node to the screen

3. Style rendering

First, styles inside a style tag are parsed by the HTML parser, and this parsing does not block the browser's rendering.

Because the HTML structure can finish parsing and rendering before these styles are applied, the page briefly shows structure without style, which causes the "flash" (flash of unstyled content) phenomenon.

For this reason, we should avoid using the style tag in development.

4. Link style rendering

Styles linked in via the link tag are parsed by the CSS parser, which blocks the browser's rendering while it works; in other words, it behaves synchronously with respect to rendering.

Because it blocks rendering, it avoids the flash of unstyled content: the user only sees a loading wait rather than a half-styled page, which is a comparatively better experience, so this approach is recommended.

5. Block rendering

About CSS blocking

Only CSS introduced by link can block rendering, because it is parsed by the CSS parser; styles in a style tag are parsed by the HTML parser.

Style in the style tag:

  • Parsed by an HTML parser
  • Does not block browser rendering (hence the “flash” phenomenon, structure before style rendering)
  • Not blocking style parsing (not blocking rendering, of course not blocking parsing)

External CSS styles introduced by Link: the recommended approach

  • Parsed by the CSS parser
  • Blocks browser rendering (thus preventing the "flash" phenomenon)
  • Blocks execution of the JS code after it (JS can read and modify styles; if this were not blocked, we could not tell whether the final style came from the CSS or from a JS operation)
  • Does not block parsing of the DOM (parsing is not the same as rendering)

The core idea of CSS optimization is to make external CSS load as fast as possible:

  • Use CDN nodes to accelerate external resources
  • Compress CSS (using a build tool such as webpack, gulp, etc.)
  • Reduce the number of HTTP requests and combine multiple CSS
  • Optimize the code for the stylesheet

Blocking with JS

  • Blocks subsequent DOM parsing: the browser cannot predict what a script will do. If it parsed the DOM first and the script then removed part of it, that parsing would be wasted work, so the browser simply blocks parsing of the page until the script runs
  • Blocks page rendering: the same reasoning as above; JS can manipulate the DOM, and the browser avoids doing useless work
  • Blocks execution of subsequent JS: later scripts may depend on earlier ones

Remark:

  1. CSS parsing and JS execution are mutually exclusive: JS waits while CSS is being parsed, and CSS parsing waits while JS runs
  2. Neither CSS blocking nor JS blocking stops the browser from fetching external resources (images, styles, scripts, etc.). As soon as the browser, while loading the document, encounters content that involves a network request, it can send the request, whether for an image, a stylesheet, or a script; once the resource arrives, the browser decides when to use it
  3. Browser pre-parsing optimization: mainstream browsers, while executing JS, open a thread that quickly scans the rest of the document; if it finds network requests it issues them ahead of time, and if the remaining code does not touch the DOM the browser can continue parsing the DOM (this softens JS blocking and is an optimization of modern browsers)

IV. Browser local storage

1. SessionStorage, LocalStorage and Cookie

Cookie

Because HTTP is stateless, the server cannot tell whether two requests were sent by the same user. Cookies were proposed to solve this problem. A cookie is only about 4KB of plain text and is carried with every HTTP request to its domain.

Usage scenario:

  • In conjunction with sessions, we store the sessionID generated by the server in a Cookie. Each request is carried with this SessionID so that the server knows who made the request and can respond accordingly.
  • Can be used to count the number of clicks on the page
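A small sketch of reading a cookie string of the kind `document.cookie` exposes. The parser is illustrative and not a full RFC 6265 implementation.

```javascript
// Parse "name=value; name2=value2" cookie text into an object.
// Illustrative sketch; values are assumed to be URI-encoded.
function parseCookies(cookieString) {
  const out = {};
  for (const pair of cookieString.split(';')) {
    const [name, ...rest] = pair.trim().split('=');
    if (name) out[name] = decodeURIComponent(rest.join('='));
  }
  return out;
}

parseCookies('sessionId=abc123; theme=dark');
// { sessionId: 'abc123', theme: 'dark' }
```

This is how the sessionId mentioned above would be read back out of the cookie header on either side.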

LocalStorage

LocalStorage, part of WebStorage, stores about 5MB of content; it is persistent and is not deleted when the browser is closed.

Features:

  • If the browser is set to private mode, it cannot be read
  • LocalStorage is restricted by the same-origin policy; pages from a different origin cannot access it
  • Large size and persistence

The API is similar to sessionStorage

Usage scenario:

  1. Music site visitor mode playlist
  2. Search box visit record in tourist mode
  3. Cross-tab communication

SessionStorage

SessionStorage is mainly used to temporarily save the data of one window (or tab). The data survives a page refresh but is deleted once the window or tab is closed.

Comparison with LocalStorage:

  • Both are local storage, and the storage size is similar
  • SessionStorage also has the same origin policy. However, SessionStorage has a stricter restriction. SessionStorage can be shared only in the same browser and the same window.
  • LocalStorage and SessionStorage cannot be crawled by crawlers.

Usage scenario:

  • Information temporarily stored on the site when a visitor logs in

2. Cross-tab communication

We can use LocalStorage to achieve cross-tab communication. The scenario: when data changes on page A, page B reflects the change without refreshing, much like a takeout site keeping state in sync across tabs.

We operate the input box in the A page and save it to LocalStorage when out of focus.

  let a = document.getElementById('tt')
  a.onblur = () => {
    localStorage.setItem('ha', a.value)
  }

On the b page, listen for the storage event and set the value

  let a = document.getElementById('tt')
  window.addEventListener('storage', (e) => {
    a.value = e.newValue
  })

V. The same-origin policy of the browser

1. What is the same Origin policy

The same-origin policy is a security mechanism of browsers that restricts how a document from one origin can interact with resources from another origin.

Same-origin indicates that the protocol, port number, and domain name must be the same.

Consider the following examples, compared against the origin store.company.com/dir/page.ht…

| URL | Cross-domain? | Why |
| --- | --- | --- |
| store.company.com/dir/page.ht… | Same origin | Exactly the same |
| store.company.com/dir/inner/a… | Same origin | Only the path differs |
| store.company.com/secure.html | Cross-domain | Different protocol |
| store.company.com:81/dir/etc.htm… | Cross-domain | Different port (the http default is 80) |
| news.company.com/dir/other.h… | Cross-domain | Different host |
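The three-part comparison can be sketched with the standard URL API; the URLs below are illustrative examples in the same spirit as the table.

```javascript
// Same-origin check: protocol + hostname + port must all match.
function isSameOrigin(a, b) {
  const ua = new URL(a);
  const ub = new URL(b);
  return ua.protocol === ub.protocol &&
         ua.hostname === ub.hostname &&
         ua.port === ub.port;
}

isSameOrigin('http://store.company.com/dir/page.html',
             'http://store.company.com/dir/inner/another.html'); // true: only the path differs
isSameOrigin('http://store.company.com/a',
             'https://store.company.com/a'); // false: different protocol
```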

The same-origin policy restricts three aspects:

  1. Local storage is isolated across domains: cookie, WebStorage, IndexedDB
  2. The JS in the current domain cannot access the DOM in different domains
  3. Cross-domain requests cannot be sent using Ajax in the current domain

The same-origin policy is actually a restriction on JS scripts rather than on the browser as a whole; ordinary resource requests such as img have no cross-domain restriction in general.

2. How do I solve the cross-domain problem

There are seven common ways to solve cross-domain problems: JSONP, CORS, proxy, postMessage, Socket.IO, iframe, and nginx reverse proxy.

Among them, the first three are the most commonly used: JSONP, CORS, and a proxy.

JSONP

As we said earlier, the same-origin policy restricts JS scripts, while GET requests for external resources through src attributes, such as those of img and script tags, are not affected by cross-domain restrictions. So we can use a callback function to interact with the server.

The client sends a GET request carrying a callback parameter. The server wraps the data the interface would return in a call to that callback and sends it back. The browser parses and executes the returned script, and the front end receives the data inside the callback function.

Here’s an example:

server.js

const http = require('http');
const urly = require('url');
var obj = {
    name: 'jam',
    age: '12'
}
http.createServer((req, res) => {
    var parmer = urly.parse(req.url, true)

    console.log(parmer);
    // If the request carries a callback parameter, wrap the data in a call to it
    if (parmer.query.callback) {
        var str = parmer.query.callback + '(' + JSON.stringify(obj) + ')'
        res.write(str);
    } else {
        res.write(JSON.stringify(obj));
    }
    res.end();
}).listen(8000, function () {
    console.log('Server on');
});

client.js

<!DOCTYPE html>
<html lang="en">

<head>
    <meta charset="UTF-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Document</title>
</head>
<script>
    // The callback function to execute
    function hello(data) {
        console.log(data); // prints obj
    }
</script>
<!-- Use a script tag to make the GET request, passing the callback name to the server -->
<script src="http://127.0.0.1:8000/?callback=hello"></script>
<body>
</body>
</html>

Disadvantages of JSONP:

  1. Only GET requests can be made
  2. It is not secure and may be subjected to XSS attacks

CORS

Cross-domain resource sharing

CORS allows the browser to issue XMLHttpRequest requests to a cross-origin server, overcoming the restriction that AJAX may only talk to the same origin. It requires support from both the browser and the server; all current browsers support it (Internet Explorer from IE10 onward). The entire CORS exchange is completed automatically by the browser, with no user involvement: to the developer, CORS code looks exactly like same-origin AJAX code. When the browser notices that an AJAX request crosses origins, it automatically adds some extra header fields, and sometimes an extra preflight request, but the user does not notice. The key to CORS is therefore the server: as long as the server implements the CORS interface, cross-origin communication works.

We can generally set response headers to specify which origins are allowed to access the resource and which request methods are allowed:

res.header('Access-Control-Allow-Origin', '*')
res.header('Access-Control-Allow-Headers', 'Authorization, X-API-KEY, Origin, X-Requested-With, Content-Type, Accept, Access-Control-Request-Method')
res.header('Access-Control-Allow-Methods', 'GET, POST, OPTIONS, PATCH, PUT, DELETE')

Note that Access-Control-Allow-Origin can be set in two ways: * or a concrete domain.

For security reasons, browsers will not send cookies when * (accept all domains) is used; a concrete domain works fine. In actual development we usually only need a single domain for the business anyway.
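A common pattern is to echo back a whitelisted origin instead of `*`, so that cookie-carrying requests still work. A sketch, where the whitelist entries are made-up examples:

```javascript
// Echo a whitelisted origin instead of '*' so credentialed requests succeed.
// The origins here are hypothetical examples.
const allowedOrigins = new Set([
  'https://app.example.com',
  'https://admin.example.com'
]);

function corsHeadersFor(requestOrigin) {
  if (!allowedOrigins.has(requestOrigin)) return {}; // not allowed: no CORS headers
  return {
    'Access-Control-Allow-Origin': requestOrigin, // a concrete origin, not '*'
    'Access-Control-Allow-Credentials': 'true',
    'Vary': 'Origin' // caches must key on the Origin header
  };
}
```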

Proxy

Data is forwarded through a proxy server

The idea is to run a local server that forwards our requests and relays the responses back.

Because server-to-server interaction has no cross-domain problem, this is also a way the front end can solve cross-domain issues completely on its own.

Configuration in vue.config.js:

module.exports = {
  devServer: {
    host: 'localhost',
    port: 8080,
    proxy: {
      '/api': {
        target: 'http://mall-pre.springboot.cn',
        changeOrigin: true,
        pathRewrite: {
          '/api': ''
        }
      }
    }
  }
}

React can configure one-way cross-domain proxies in package.json

	"proxy": {
		"/api": {
			"target": "http://m.kugo.com",
			"changeOrigin": true,
			"pathRewrite": {
				"^/api": ""
			}
		}
	}

Nginx reverse proxy

Nginx proxying for cross-domain access is essentially the same principle as CORS: the configuration file sets response header fields such as Access-Control-Allow-Origin.

VI. The event mechanism of the browser

1. What is the event? What is the event model?

An event is an interactive action that occurs when you operate a web page, such as move/click.

Modern browsers typically have three event models:

Dom0-level event model

This model has no propagation, so there is no concept of event flow. Registering a handler directly on a DOM object's event property (such as onclick) is the DOM0 style.

IE Event Model

In this event model, an event consists of two processes, event processing phase and event bubbling phase. This model adds listening functions through attachEvent, and you can add multiple listening functions that are executed in sequence.

DOM2 level event model

In this event model, an event has three processes, the first of which is the event capture phase. Capture refers to the event propagating from the Document all the way down to the target element, checking in turn whether the passing node is bound to the event listener function, and executing if so. The latter two phases are the same as the two phases of the IE event model. In this event model, the function bound to the event is addEventListener, where the third parameter specifies whether the event is executed during the capture phase.

2. How do I stop events from bubbling

  • event.stopPropagation()

  • event.cancelBubble = true (IE)

3. Understanding of event delegation mechanism

The delegation mechanism actually uses the event-bubbling of the DOM2 event model: child elements delegate their event handling to a parent element. Because the event passes through the parent node as it bubbles, the parent can find the target node through the event object, so we can put the listener for the child elements on the parent and have the parent handle the child elements' events uniformly.

Using event delegates reduces memory consumption by eliminating the need to bind a listener event to each child element. For example, if you add a new child node, you don’t need to add a listener event to it. The listener function in the parent element will handle the event binding.

Features:

  • Reduced memory consumption, no need to add a large number of listener events
  • Dynamic binding event

Disadvantages:

  • Bubble performance is affected when the DOM hierarchy is too deep
  • There are some limitations: events such as mousemove and mouseout do bubble, but they require constantly recalculating positions, which is very expensive, so they are unsuitable for delegation

Optimization scheme:

  • Use event delegates only where necessary
  • Reduce the level of binding and remove the binding from the body element
  • Reduce the number of bindings and, if possible, combine the bindings of multiple events into a single event delegate, which will be distributed by the callback of the event delegate.

Example: perform a unified operation on the li tags under a ul tag.

 ul.addEventListener('click', function (e) {
        var e = e || window.event;
        var target = e.target || e.srcElement;
        target.innerHTML += 'bbb';
        target.style.color = 'yellow';
    }, false);

4. Browser event loop

Because JS is single-threaded, execution is ordered by pushing the execution contexts of functions onto the execution stack. While synchronous code runs, if an asynchronous event is encountered, the JS engine does not wait for the time-consuming operation to finish before running the following synchronous code; it hands the operation off and continues executing the other tasks on the stack. When the asynchronous event completes, its callback is added to a task queue. Task queues are divided into a macro task queue and a microtask queue. When the code on the current execution stack has finished, the JS engine checks the microtask queue; if it contains tasks, the event at the head of the microtask queue is pushed onto the stack and executed, and only after the microtask queue is emptied can tasks in the macro task queue run. (Note that before a macro task executes, the microtask queue must be empty.)

The execution sequence of the event loop is as follows:

  • The synchronization code is executed first, which is a macro task

  • When all synchronous code has been executed, the execution stack is empty and queries whether any asynchronous code needs to be executed

  • Perform all microtasks

  • When all microtasks are performed, the page is rendered if necessary

  • The next Event Loop is started, executing the asynchronous code in the macro task
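The ordering above can be traced with a tiny script; the output order assumes a standard JS engine such as Node or a browser.

```javascript
// Trace of the event-loop ordering: synchronous code first,
// then all microtasks, then the next macro task.
const order = [];

setTimeout(() => order.push('macro: setTimeout'), 0);       // macro task
Promise.resolve().then(() => order.push('micro: promise.then')); // microtask
order.push('sync: script');                                  // synchronous code

setTimeout(() => {
  console.log(order);
  // ['sync: script', 'micro: promise.then', 'macro: setTimeout']
}, 0);
```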

5. What are macro tasks and microtasks?

Microtasks: then callbacks of Promises, process.nextTick in Node, etc.

Macro tasks: script execution, setTimeout, setInterval, setImmediate, and other timed events

6. The difference between Node event loops and browser event loops

Node’s event loop is divided into six phases, which are executed repeatedly in sequence. Whenever a phase is entered, the function is retrieved from the corresponding callback queue and executed. When the queue is empty or the number of callbacks executed reaches a threshold set by the system, the next stage is entered.

  • Timers: this phase executes callbacks scheduled by setTimeout() and setInterval()
  • Pending callbacks: callbacks for certain system operations, such as a TCP connection receiving ECONNREFUSED
  • Idle, prepare: used only internally
  • Poll: retrieve new I/O events and execute I/O-related callbacks
  • Check: setImmediate() callbacks are executed here
  • Close callbacks: close-type callbacks such as socket.on('close', ...)

We find that Node's event loop is more complex. Each tick of the loop is also divided into microtasks and macro tasks:

  • Macrotasks: setTimeout, setInterval, I/O events, setImmediate, close events
  • Microtasks: Promise then callbacks, process.nextTick, queueMicrotask

However, event loops in Node are more than just microtask queues and macro task queues:

  • Microtask queues:
    • next tick queue: process.nextTick
    • other microtask queue: Promise then callbacks, queueMicrotask
  • Macro task queues:
    • timer queue: setTimeout, setInterval
    • poll queue: I/O events
    • check queue: setImmediate
    • close queue: close events

So, in the tick of each event loop, the code is executed in the following order:

  • next tick microtask queue
  • other microtask queue
  • timer queue
  • poll queue
  • check queue
  • close queue

Related interview questions:

async function async1() {
  console.log('async1 start')
  await async2()
  console.log('async1 end')
}

async function async2() {
  console.log('async2')
}

console.log('script start')

setTimeout(function () {
  console.log('setTimeout0')
}, 0)

setTimeout(function () {
  console.log('setTimeout2')
}, 300)

setImmediate(() => console.log('setImmediate'));

process.nextTick(() => console.log('nextTick1'));

async1();

process.nextTick(() => console.log('nextTick2'));

new Promise(function (resolve) {
  console.log('promise1')
  resolve();
  console.log('promise2')
}).then(function () {
  console.log('promise3')
})

console.log('script end')

Output results:

script start
async1 start
async2
promise1
promise2
script end
nextTick1
nextTick2
async1 end
promise3
setTimeout0
setImmediate
setTimeout2

VII. Browser garbage collection mechanism

1. V8 garbage collection mechanism

V8 implements a precise GC and divides the heap into a new generation and an old generation.

New generation algorithm

The new generation collects objects with the Scavenge GC algorithm.

The new generation space is divided into two halves, From and To. One of them is always in use and the other free. Newly allocated objects go into the From space; when From fills up, a new generation collection runs: the algorithm copies the live objects into the To space and discards the dead ones. When copying is complete, the To and From spaces swap roles, and one round of GC is finished.

Old generation algorithm

Objects in the new generation that survive for a while are transferred to the old generation's memory area, where garbage collection runs less frequently. The old generation is divided into a pointer area and a data area: the former holds objects that may contain pointers to other objects, while the latter holds only raw data objects with no pointers to other objects.

Because the old generation holds many live objects, the copy-and-swap algorithm of the new generation would obviously waste a lot of space, so different algorithms, commonly called mark-sweep and mark-compact, are used.

In early IE, the GC algorithm was reference counting: the collector checks whether anything still references an object, and reclaims it when nothing does. But this algorithm breaks down with circular references, which is why no major browser uses it for GC today.

Mark-sweep

Mark-sweep has two phases: mark and sweep.

In the mark phase, all objects in the heap are traversed and the live ones are marked; in the sweep phase, the dead objects are cleared. The algorithm essentially decides whether an object is still reachable, and therefore whether it can be reclaimed.

Approximate steps:

  1. The garbage collector starts from a set of GC roots, the variables reachable from the root; in JavaScript, the window global object can be viewed as such a root
  2. The garbage collector then traverses all child nodes reachable from the roots and marks them as active; anything the roots cannot reach is inactive and treated as garbage
  3. Finally, the garbage collector frees all inactive memory blocks and returns them to the operating system
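The steps above can be illustrated with a toy mark-and-sweep over a hypothetical object graph; this is a sketch of the idea only, not V8's actual implementation.

```javascript
// Toy mark-and-sweep. The "heap" is a Map from object id to the ids it
// references; objects reachable from the roots survive, everything else is swept.
function markAndSweep(heap, roots) {
  const marked = new Set();
  const stack = [...roots];
  // Mark phase: walk every reference reachable from the roots.
  while (stack.length) {
    const obj = stack.pop();
    if (marked.has(obj)) continue;
    marked.add(obj);
    stack.push(...(heap.get(obj) || []));
  }
  // Sweep phase: reclaim unmarked objects.
  for (const obj of heap.keys()) {
    if (!marked.has(obj)) heap.delete(obj);
  }
}

// 'a' -> 'b'; 'c' points at 'a' but nothing reaches 'c', so it is garbage.
const exampleHeap = new Map([['a', ['b']], ['b', []], ['c', ['a']]]);
markAndSweep(exampleHeap, ['a']);
// exampleHeap now holds only 'a' and 'b'
```

Note that reachability, unlike reference counting, also collects cycles that nothing points into.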

Defect:

After a mark-sweep pass, free memory may be left in a discontinuous state, because the reclaimed objects can sit at scattered addresses. This memory fragmentation becomes a problem when a large object later needs a contiguous allocation: garbage collection may be triggered early even though it is not really necessary, since plenty of memory is actually free, just not contiguous.

It is like going to the cinema with your girlfriend: there are plenty of empty seats, but no two adjacent ones, so you cannot sit together. You could wait for an empty showing tomorrow (a full GC), but you do not have to; compacting the seating once would let you sit together right away. This leads to the concept of mark-compact.

Mark-compact

During collection, after the dead objects are cleared, the live objects are moved toward one end of the heap memory; once the move is complete, all memory beyond the boundary is cleared.

Promotion from the new generation to the old generation

When an object survives multiple rounds of collection in the new generation, it is considered long-lived and is transferred directly to the old generation at the next garbage collection. This phenomenon is called object promotion.

There are two conditions for an object to be promoted:

  1. The object has already survived a Scavenge collection
  2. The To space is more than 25% full

2. How to avoid memory leaks

  1. Create as few global variables as possible

  2. Clear timers manually

  3. Use closures sparingly

  4. Clear DOM references
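Point 2 can be sketched as a pattern: keep the timer handle and expose a disposer that releases it when the component or page is torn down. The names here are illustrative.

```javascript
// Keep the interval id and return a disposer; calling it releases the timer
// and whatever the callback's closure holds onto.
function startPolling(task, intervalMs) {
  const id = setInterval(task, intervalMs);
  return function stop() {
    clearInterval(id);
  };
}

let ticks = 0;
const stop = startPolling(() => { ticks += 1; }, 1000);
stop(); // without this, the interval (and anything its callback closes over) leaks
```

The same disposer pattern applies to event listeners and cached DOM references: whoever creates the resource returns the function that frees it.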