The HTTP protocol is an important piece of foundational knowledge for front-end development, yet many developers without a computer-science background are shaky on it. The author systematically studied the basics of the HTTP protocol through the book Illustrated HTTP, and tries here to describe the key points of the protocol in more concise language, to save time for readers who do not enjoy working through the book.

Contents

  • Introduction to HTTP
  • The HTTP version
  • The HTTP message
  • The Web server
  • HTTPS
  • Web Security Precautions
  • Summary

Introduction to HTTP

The book Illustrated HTTP describes HTTP’s role in the network as follows:

The Web uses a protocol called HyperText Transfer Protocol (HTTP) as the specification for completing the series of operations between client and server, and a protocol is simply an agreed-upon set of rules. In other words, the Web is built on communication over the HTTP protocol.

The HTTP protocol defines how a Web client requests a Web page from a Web server and how the server delivers that page to the client. HTTP uses a request/response model: the client sends a request message containing the request method, URL, protocol version, request headers, and request data; the server answers with a status line (protocol version and a success or error code), response headers carrying server information, and the response data.

HTTP is a stateless protocol: the protocol itself does not preserve state between requests and responses, so by itself neither side can remember who the other is or what happened before. This is the main reason Cookie technology exists: client-side state management. The browser automatically stores cookies according to the Set-Cookie header in the response sent by the server, then attaches the Cookie header to every subsequent HTTP request to the same site, which lets the server identify the client and its state. (A minimal sketch of this round trip follows.)
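Below is a minimal sketch of that round trip in Node.js; the cookie name, value, and port are made up for illustration.

```js
// Minimal sketch of the cookie round trip described above (cookie name/value and port are made up).
const http = require('http');

http.createServer((req, res) => {
  if (!req.headers.cookie) {
    // First visit: the server issues an identifier via the Set-Cookie response header
    res.setHeader('Set-Cookie', 'sid=34foo25; Path=/; HttpOnly');
    res.end('new session');
  } else {
    // Later visits: the browser sends the cookie back automatically in the Cookie request header
    res.end('welcome back: ' + req.headers.cookie);
  }
}).listen(3000);
```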

TCP/IP protocol family

To better understand the HTTP protocol, we first need to look at the TCP/IP protocol family. HTTP is a member of the TCP/IP protocol family, which is the foundation of communication on the Internet. The TCP/IP family is divided into the following four layers (basic networking knowledge worth memorizing):

  • The application layer

The application layer defines the communication protocols that provide application services to users.

The TCP/IP family ships with many common application-layer protocols, for example File Transfer Protocol (FTP), Domain Name System (DNS), and HTTP.

DNS resolves domain names (for example, www.baidu.com) to IP addresses (for example, 119.75.217.109).

  • The transport layer

The transport layer sits beneath the application layer and provides the protocols for transferring data between two computers over a network connection.

There are two different protocols at the transport layer: Transmission Control Protocol (TCP) and User Datagram Protocol (UDP).

TCP is full duplex: data can be sent and received at the same time, much like a phone call in which both sides can speak and listen simultaneously. TCP uses a three-way handshake to establish a connection and a four-way handshake to close it, which makes transmission more reliable but less efficient than UDP.

UDP is connectionless: no connection needs to be established before data is transmitted. UDP does not guarantee that data arrives in order, or arrives at all, so it is less reliable, but it is also lighter and more efficient than TCP.

  • The network layer

The network layer determines the route over which data travels to the other party’s computer (the IP protocol, among others).

When data travels across many computers and network devices, the network layer chooses one transmission path among many options, much like the route planning a travel app such as Ctrip provides for getting home.

  • The link layer

The link layer handles the hardware side of connecting to the network: the operating system’s device drivers, the NIC (Network Interface Card), and physically visible parts such as optical fiber and other transmission media (including connectors). Everything at the hardware level falls within the scope of the link layer.

The traffic flow for a typical Web application looks like this:

When the sender passes data down through the layers, each layer prepends its own header information; conversely, when the receiver passes data up through the layers, each layer strips the corresponding header.

Serial connections, persistent connections, pipelined persistent connections, and HTTP/2.0 multiplexing

  • Serial connection: HTTP is connectionless by nature, meaning each connection handles only one request and is closed as soon as the response is received. In HTTP/1.0 (serial connection, also called short connection or short polling), the TCP connection is torn down after every HTTP exchange, so each new HTTP request needs a brand-new connection. A page can easily fire dozens of HTTP requests and quickly hit the browser’s connection limit, and creating a new TCP connection for every request (a three-way handshake and four-way teardown each time) greatly increases the communication overhead.
  • Persistent connections: To solve this problem, persistent connections (also called long connections or long polling) were introduced. Within a certain time window, as long as neither end of the HTTP exchange under the same domain asks to disconnect, the TCP connection stays open and other requests can reuse the channel. HTTP/1.1 implements this and makes all connections persistent by default, which reduces the network resources and time wasted on TCP handshakes when a client issues multiple HTTP requests. However, persistent connections work in a blocking fashion: the next request cannot be sent until the previous response has returned, so if that response is slow the following request must wait (often described as thread or head-of-line blocking).
  • Pipelined persistent connections: Pipelining lets the client send the next request without waiting for the previous response, with the responses still returned in order. Modern browsers do not enable pipelining by default. (I won’t say much more about this; the information I have collected on it is limited.)
  • HTTP/2.0 multiplexing: Each HTTP request carries a sequence identifier, so the browser can issue multiple requests concurrently, and the server uses the identifier on each received frame to reassemble the data into the correct request without mixing anything up (see the linked article for details). Likewise, the server can return multiple responses concurrently, and the browser reorders the response frames by identifier and matches them to their requests. In addition, all requests under the same domain name reuse a single TCP connection, greatly increasing the effective concurrency of the server.
  • WebSocket: WebSocket is a full-duplex protocol introduced alongside HTML5 for communication between client and server. The client initiates the connection; once it is established, the client can send messages to the server at any time and the server can actively push messages to the client. (A minimal browser-side sketch follows the figure descriptions below.)

The figure below distinguishes the three kinds of connections:

Figure (a): a serial connection must establish a new TCP connection for every request.

Figure (b): a persistent connection lets multiple HTTP requests reuse the same TCP connection, but each request can only be sent after the previous response has returned.

Figure (c): a pipelined persistent connection also reuses the same TCP connection, and multiple HTTP requests can be sent without waiting, but the responses must still come back in order.
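As a quick illustration of the WebSocket item above, here is a minimal browser-side sketch; the wss://example.com/ws URL is a placeholder.

```js
// Minimal browser-side WebSocket sketch (wss://example.com/ws is a placeholder endpoint).
const socket = new WebSocket('wss://example.com/ws');
socket.addEventListener('open', () => socket.send('hello'));     // client can push to the server
socket.addEventListener('message', (e) => console.log(e.data));  // server can push to the client
socket.addEventListener('close', () => console.log('closed'));
```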

URI

HTTP uses URIs to locate resources on the Internet. Concept:

  • Uniform Resource Identifier (URI)
  • Uniform Resource Locator (URL)
  • Uniform Resource Name (URN)

Personally, I think of a URI as the umbrella term for the different ways of identifying a resource. For example, a file a.html can be identified by its file name a.html, by its path www.baidu.com/a.html (a URL), or even by a name such as the URN a:1535-3613. Their relationship is as follows:

The HTTP version

For more details on the differences between HTTP versions, please refer to this article.

HTTP/1.0

The earliest HTTP was used for simple web pages and requests, so it was kept simple: each request opened a new TCP connection, and the connection was closed as soon as the response was received.

HTTP/1.1

  • HTTP/1.1 introduces more cache-control strategies, such as Entity Tag (ETag), If-Unmodified-Since, If-Match, If-None-Match, and so on
  • HTTP/1.1 supports range requests via the Range request header (a small sketch follows this list)
  • HTTP/1.1 requires every request message to include the Host header, to distinguish the domain names of different virtual hosts on the same physical host
  • Persistent connections are enabled by default in HTTP/1.1: multiple HTTP requests and responses can travel over one TCP connection, reducing the cost and latency of repeatedly opening and closing connections
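As a quick sketch of the Range item above; the URL is a placeholder and the example assumes the server supports range requests.

```js
// Hypothetical range request with fetch; https://example.com/video.mp4 is a placeholder URL.
fetch('https://example.com/video.mp4', {
  headers: { Range: 'bytes=0-1023' } // ask only for the first 1 KB
}).then((res) => {
  console.log(res.status);                       // 206 Partial Content if ranges are supported
  console.log(res.headers.get('Content-Range')); // e.g. "bytes 0-1023/146515"
  return res.arrayBuffer();
});
```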

HTTP/2.0

There are two very important concepts in HTTP/2, frames and streams; understanding them is a prerequisite for understanding the multiplexing described below. A frame is the smallest unit of data transmission, and each frame carries an identifier indicating which stream it belongs to. A stream is a data flow made up of multiple frames, and each stream represents one request. There is also a Chrome extension that makes it easy to check the HTTP version used by the current site (after installing it, open Chrome DevTools > Network, right-click the Name/Size/Time column header, and enable the Protocol column to see the protocol version).

  • New binary format: HTTP/1.x parsing is text-based, and text-based protocol parsing has inherent drawbacks: text can take many forms, so many edge cases must be handled. Binary framing, by contrast, only deals with combinations of 0s and 1s. For this reason HTTP/2.0 parses the protocol in a binary format, which is both convenient and robust.
  • Multiplexing: HTTP/2.0 supports multiplexing, an upgrade over HTTP/1.1 persistent connections. Multiplexing means multiple streams can coexist in one TCP connection, that is, multiple requests can be in flight at once. The server knows which stream (which request) a frame belongs to from the identifier in the frame and reassembles the request by reordering the frames. Multiplexing allows requests to be issued concurrently, and no request or response has to wait for any other, avoiding HTTP-level head-of-line blocking. A slow request therefore no longer holds up the others, which greatly improves transmission performance.
  • Header compression: HTTP/1.x request and response headers carry a lot of information and are resent with every request. HTTP/2.0 compresses headers with an encoder: both sides of the communication cache a header-field table, which avoids re-transmitting repeated headers and reduces the size of what is transmitted.
  • Server push: the server can send the CSS/JS/image resources the client will need along with index.html, so the client does not have to request them separately (they are then served from cache). A minimal sketch follows this list.
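Below is a minimal server-push sketch with Node’s built-in http2 module; key.pem and cert.pem are placeholder certificate files, and the pushed content is made up.

```js
// Minimal sketch of HTTP/2 server push (key.pem / cert.pem are placeholder certificate files).
const http2 = require('http2');
const fs = require('fs');

const server = http2.createSecureServer({
  key: fs.readFileSync('key.pem'),
  cert: fs.readFileSync('cert.pem')
});

server.on('stream', (stream, headers) => {
  if (headers[':path'] === '/index.html') {
    // Push style.css along with index.html so the browser does not have to request it separately
    stream.pushStream({ ':path': '/style.css' }, (err, pushStream) => {
      if (err) return;
      pushStream.respond({ ':status': 200, 'content-type': 'text/css' });
      pushStream.end('h1 { color: red; }');
    });
    stream.respond({ ':status': 200, 'content-type': 'text/html' });
    stream.end('<link rel="stylesheet" href="/style.css"><h1>hello HTTP/2</h1>');
  } else {
    stream.respond({ ':status': 404 });
    stream.end();
  }
});

server.listen(8443);
```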

HTTP/3.0

HTTP/2.0 uses multiplexing, so usually only one TCP connection is needed per domain. But when a packet is lost on that connection, the whole TCP connection waits for retransmission and all subsequent data is blocked. With HTTP/1.x, by contrast, multiple TCP connections can be open, so a packet loss only affects one of them while the others keep transmitting. This blocking stems from the underlying TCP protocol, and changing TCP itself is impractical, much like typeof null === ‘object’: fixing it would cause more problems than it solves. Since TCP cannot be changed, the answer is to replace it with another protocol: Google built the QUIC protocol on top of UDP, and HTTP/3 runs over QUIC.

QUIC is based on UDP, but UDP on its own is unreliable, so QUIC adds many capabilities on top of it, such as multiplexing, 0-RTT connection setup, TLS 1.3 encryption, flow control, ordered delivery, and retransmission. Its main advantages include the following:

  • Avoiding blocking between streams: when packets from multiple streams travel over one TCP connection and a packet from one stream is lost, TCP can only continue transmitting the other streams after that packet has been retransmitted. In the UDP-based QUIC protocol, however, the streams are truly independent of each other: if a packet in one stream needs to be retransmitted, data transmission in the other streams is unaffected.
  • Fast session resumption: an ordinary TCP connection is identified by the IP addresses, ports, and protocol of the two ends. In network-switching scenarios, for example when a phone moves from Wi-Fi to 4G, the IP address changes and the TCP connection has to be rebuilt. QUIC instead identifies each connection with a unique connection ID; as long as that ID stays the same when the network environment changes, data transmission can continue without a new handshake.

The HTTP message

The messages exchanged in HTTP protocol interactions are called HTTP messages. The HTTP message sent by the client is a request message, and the one sent by the server is a response message.

A request message consists of a request line (request method, request URI, and protocol version), request headers (client information), and a content entity (user or resource data, which may be empty).

A response message consists of a status line (protocol version and status code), response headers (server name, resource metadata, and so on), and a content entity (the resource data returned by the server). A schematic example follows.
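For illustration, here is a schematic request and response; www.example.com and the body are placeholders.

```
GET /index.html HTTP/1.1        <- request line: method, request URI, protocol version
Host: www.example.com           <- request headers
User-Agent: Mozilla/5.0
Accept: text/html
                                <- blank line, then the (optional) content entity

HTTP/1.1 200 OK                 <- status line: protocol version and status code
Content-Type: text/html; charset=UTF-8
Content-Length: 12              <- response headers

Hello world!                    <- content entity returned by the server
```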

Request method

  • GET: the GET method is used to obtain a resource from the server
  • POST: the POST method is generally used to transmit an entity body
  • PUT: the PUT method is used to transfer files
  • DELETE: the DELETE method deletes files
  • HEAD: the HEAD method obtains the message headers without returning the message body
  • OPTIONS: the OPTIONS method asks which methods the resource identified by the request URI supports

The concepts themselves are simple; if any use case is unclear, a quick search will turn up plenty of examples. A few quick fetch sketches follow.
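```js
// Quick sketches of common methods; https://example.com/api/users is a placeholder endpoint.
fetch('https://example.com/api/users');                           // GET: read a resource
fetch('https://example.com/api/users', {                          // POST: transmit an entity body
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ name: 'laogeng' })
});
fetch('https://example.com/api/users/1', { method: 'DELETE' });    // DELETE: remove a resource
fetch('https://example.com/api/users/1', { method: 'HEAD' });      // HEAD: headers only, no body
fetch('https://example.com/api/users', { method: 'OPTIONS' });     // OPTIONS: ask which methods are supported
```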

Status code

An HTTP status code describes the result of the client’s request: whether the server handled it normally, or what kind of error occurred.

2XX Success (the request was handled normally)
200 OK: the request from the client was processed correctly on the server
204 No Content: the request succeeded, but the response message contains no entity body
206 Partial Content: a range request succeeded
3XX Redirection (the browser needs to take additional action)
301 Moved Permanently: the resource has been assigned a new permanent URL
302 Found: temporary redirection; the resource has temporarily been assigned a new URL
303 See Other: the resource exists at another URL and should be fetched with GET. (For 301/302/303 responses, almost all browsers drop the message body and automatically re-request with GET.)
304 Not Modified: the server allows access to the resource, but the conditional request was not satisfied (despite the 3XX code, this is unrelated to redirection)
307 Temporary Redirect: similar to 302, but the client is expected to keep the request method unchanged when requesting the new address
4XX Client error
400 Bad Request: the request message contains a syntax error
401 Unauthorized: the request requires HTTP authentication information
403 Forbidden: the server refuses access to the requested resource; the reason may be returned in the entity body
404 Not Found: the requested resource was not found on the server
5XX Server error
500 Internal Server Error: an error occurred while the server was executing the request
501 Not Implemented: the server does not support the functionality required by the current request
503 Service Unavailable: the server is temporarily overloaded or down for maintenance and cannot handle the request

Header fields

The tables below list common header field names and their functions in request and response messages:

General headers (may appear in both request and response messages)

| Header | Function |
| --- | --- |
| Cache-Control | Controls caching behavior: no-cache (force revalidation with the server), no-store (do not cache at all), max-age=111111 (maximum cache lifetime of the resource, in seconds), public (both clients and proxy servers may cache), private (proxy servers may not cache) |
| Connection | The connection type the browser prefers: keep-alive / close (enable or disable persistent connections) |
| Date | Time the message was created |
| Pragma | Used only in request messages; asks intermediate servers not to return cached resources |
| Via | Proxy server information; each proxy appends its own entry, separated by commas |
| Transfer-Encoding | Transfer encoding: chunked (chunked transfer) |
| Upgrade | The protocol the client wants to upgrade to; must be used together with Connection: Upgrade, e.g. websocket |
| Warning | Warnings about caching-related issues |

Request headers (request messages only)

| Header | Function |
| --- | --- |
| Accept | Media types the client can accept: application/json, text/plain |
| Accept-Charset | Character sets the client can accept: unicode-1-1 |
| Accept-Encoding | Content encodings the client can accept: gzip, deflate |
| Accept-Language | Languages the client can accept: zh-cn,zh;q=0.9,en;q=0.8 |
| Authorization | Client authentication credentials: Bearer dSdSdFFlsfdjasd123, usually a token |
| Cookie | Cookie information sent to the server |
| Expect | Behavior expected from the server |
| From | Email address of the requester |
| Host | Domain name of the server, used to distinguish multiple virtual hosts on one server; the only header HTTP/1.1 requires |
| If-Match | The server accepts the request only if the condition holds: If-Match: "123456" is compared with the server's resource tag |
| If-Modified-Since | Return 304 if the resource has not been modified (compares the time) |
| If-None-Match | Return 304 if the resource has not been modified (compares the tag) |
| User-Agent | Client information |
| Max-Forwards | Limits how many times the request may be forwarded by proxies and gateways |
| Proxy-Authorization | Authentication credentials sent to a proxy server |
| Range | Requests part of a resource; used together with If-Range |
| Referer | URI of the page from which the request originated |
| TE | Transfer encodings the client accepts |

Response headers (response messages only)

| Header | Function |
| --- | --- |
| Accept-Ranges | Tells the client whether the server accepts range requests: bytes (yes), none (no) |
| Age | How long the resource has been in the proxy cache |
| ETag | Resource identifier; changes when the resource changes |
| Location | URL the client should redirect to |
| Proxy-Authenticate | Authentication challenge issued by a proxy server |
| Server | Server name: Apache, Nginx |
| WWW-Authenticate | Authentication scheme required to obtain the resource |
| Set-Cookie | Sends cookie information for the client to store, used to identify the user |

Entity headers (supplementary information about the body of a request or response)

| Header | Function |
| --- | --- |
| Allow | Methods allowed on the resource: GET, HEAD, POST |
| Content-Encoding | Content encoding: gzip, deflate |
| Content-Language | Content language: zh-CN |
| Content-Length | Length of the entity body in bytes |
| Content-Location | Alternate URI of the returned data |
| Content-MD5 | Base64-encoded MD5 checksum of the body |
| Content-Range | Byte range of the body in the response |
| Content-Type | Media type of the content (e.g. application/json; charset=UTF-8, which triggers a preflight request) |
| Expires | Expiration time of the content |
| Last-Modified | Time the content was last modified |

There are a lot of header fields; remembering the ones browsers commonly use is enough.

Two kinds of requests

When browsers send CORS requests (cross-domain requests), they divide the requests into simple requests and complex requests.

In day-to-day work, a request counts as a simple request when all of the following hold:

  1. The request method is HEAD, GET, or POST
  2. There are no custom request headers
  3. Content-Type is one of the following:

text/plain, multipart/form-data, application/x-www-form-urlencoded

Complex requests:

  1. Ajax requests using the PUT or DELETE methods
  2. Ajax requests that send JSON (for example, POSTing JSON data)
  3. Ajax requests with custom headers

A simple request is sent first and checked afterwards. The process is roughly as follows:

The browser detects that this is a CORS request and adds an Origin field (containing the page’s protocol, domain name, and port) ====> the server processes it (comparing the Origin to decide whether to accept the source) and returns the result to the browser ====> the browser checks whether the response headers allow the cross-origin access ====> if allowed, everything proceeds as if nothing happened; if not, the browser throws an error.

For a complex request, if it is a CORS request the browser first sends an OPTIONS request; this browser behavior is called a preflight request. (Note that no preflight happens if the request is not cross-origin, for example when a reverse proxy is used.) A minimal server-side sketch follows.
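Here is a minimal sketch of a server answering both the preflight and the actual request; the allowed origin, headers, and port are assumptions.

```js
// Minimal CORS sketch (allowed origin, headers, and port are assumptions).
const http = require('http');

http.createServer((req, res) => {
  // Compared by the browser against the page's Origin
  res.setHeader('Access-Control-Allow-Origin', 'http://localhost:8080');
  if (req.method === 'OPTIONS') {
    // Preflight: declare which methods and headers the real request may use
    res.setHeader('Access-Control-Allow-Methods', 'GET, POST, PUT, DELETE');
    res.setHeader('Access-Control-Allow-Headers', 'Content-Type, X-Token');
    res.statusCode = 204;
    return res.end();
  }
  res.setHeader('Content-Type', 'application/json');
  res.end(JSON.stringify({ ok: true }));
}).listen(3000);
```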

The Web server

It matters less whether the server software is Apache or Nginx than what role the server plays. A server can act as an origin server or as a relay (proxy) server, and a single physical server can even host several websites under different domain names.

Virtual host

The HTTP/1.1 specification allows one HTTP server to host multiple Web sites. The virtual-host feature lets you create several virtual hosts on one physical server (one IP address), each mapped to its own domain name. When a user visits http://www.laogeng.com/, DNS resolves the domain name to an IP address, the request reaches the physical server at that address, and the Host field in the request headers (now you know why Host is mandatory in HTTP/1.1) identifies which virtual host should handle it. A minimal sketch follows.
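A minimal sketch of dispatching on the Host header; blog.laogeng.com is a hypothetical second domain.

```js
// Minimal virtual-host sketch: one server, several sites, distinguished only by the Host header.
// blog.laogeng.com is a hypothetical second domain.
const http = require('http');

http.createServer((req, res) => {
  const host = req.headers.host;
  if (host === 'www.laogeng.com') {
    res.end('main site');
  } else if (host === 'blog.laogeng.com') {
    res.end('blog site');
  } else {
    res.statusCode = 404;
    res.end('unknown virtual host');
  }
}).listen(80);
```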

Proxy server

A proxy server is the “middleman” between client and server: HTTP requests are forwarded to the server through the proxy, and the server’s responses are returned to the client through it. A proxy can serve as a cache server, or, for security, hide the identity of the user (forward proxy) or of the server (reverse proxy).

  • A forward proxy works like this: to fetch content from the origin server, the client sends its request to the proxy server and names the target server; the proxy forwards the request to the origin server and returns the content it gets back to the client. Note that the real requesting client is hidden in this process, i.e. the origin server does not know who actually made the request. (This is how VPN-style tools for getting around network restrictions work.)

  • A reverse proxy works the other way around: the client sends its request to the reverse proxy, which decides where to forward it after receiving it and then returns the result to the client. Here it is the internal servers that are hidden: users do not need to know which backend server serves them, only who the reverse proxy is, and we can even treat the reverse proxy as the real server. This form of proxying is commonly used for load balancing; Nginx, for example, is an excellent reverse proxy server.

  • Reverse proxies also solve cross-origin problems. On the front end we often hit cross-origin errors when developing with scaffolding tools such as vue-cli, because the project starts a local dev service on its own port (say http://localhost:8080), so the local service and the backend interface are never same-origin. In projects built with webpack, the proxyTable/devServer proxy is commonly used to work around this. Cross-origin restrictions exist because browsers enforce the same-origin policy, but servers do not. So when our local page (domain http://localhost:8080) needs data from the target server (say http://target.com), instead of requesting target.com directly, it requests the local dev service itself at http://localhost:8080 (a same-origin request, so there is no cross-origin problem). The local proxy then forwards the request to target.com (this is server-to-server communication rather than browser-to-server, so the same-origin policy does not apply), receives the response from the target server, and returns it to the browser disguised as the local service’s own response.

    The browser sends a request to the local service -> the local proxy forwards it to the target server -> the target server responds -> the proxy returns the data as the local service’s response -> the browser receives the target server’s data.

The vue-cli reverse-proxy configuration looks like this:

```js
// vue.config.js
module.exports = {
  devServer: {
    port: 8080,      // Configure the port
    open: true,      // Open the browser automatically when the project starts
    compress: true,  // Enable compression
    overlay: {       // Show both warnings and errors as a browser overlay
      warnings: true,
      errors: true
    },
    // Set up the reverse proxy for requests
    proxy: {
      '/api': {                        // Prefix of the interfaces to be proxied
        target: process.env.BASE_URL,  // Domain name of the backend interface
        secure: false,                 // Allow proxying to HTTPS backends with self-signed certificates
        changeOrigin: true             // Rewrite the Host header to the target
      }
    }
  }
  // ...
}
```

Note that when a reverse proxy is used like this, the axios baseURL must be set to the string ‘/’; otherwise requests will not start with ‘/api’, the proxy rule will not match, and the proxy will fail. A small sketch follows.
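A small sketch of the matching axios setup, assuming the ‘/api’ prefix configured above:

```js
// Sketch of the matching axios setup (assumes the '/api' proxy prefix configured above).
import axios from 'axios';

const request = axios.create({ baseURL: '/' }); // keep '/', so '/api/...' paths hit the devServer proxy
request.get('/api/user/1');                     // forwarded by the dev server to `${process.env.BASE_URL}/api/user/1`
```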

Cache server

A cache server stores frequently accessed content on a server that is closer to the user and faster to reach, in order to speed up access. The cache server is an intermediate server between the browser and the origin server: the browser sends its HTTP request to the intermediate server, which processes it (permission checks, cache matching, and so on) and forwards it to the origin server when necessary.

HTTPS

HTTP itself offers no confidentiality, so data transmitted over HTTP essentially travels across the network in plain text. To solve this problem, various encryption techniques emerged:

  • Symmetric encryption: a single key key1 is used for both encryption and decryption. This requires both parties to hold key1, and if the first transmission of the key is intercepted by a third party, it is game over.
  • Asymmetric encryption: a public key key3 and a private key key2 form a pair; what one encrypts, the other decrypts. The server generates the pair, keeps the private key to itself, and distributes the public key freely for anyone to use. Plaintext encrypted on the client with the public key can only be decrypted with the private key. Because encryption and decryption use different keys, this is called asymmetric encryption. Compared with symmetric encryption it needs no shared secret between client and server: as long as the private key never leaves the server, intercepting the public key on the network is useless.
  • Hybrid encryption: the client generates a symmetric key key1, encrypts it with the server’s asymmetric public key key3, and sends it to the server, which decrypts it with its private key key2; from then on both sides communicate using key1 with symmetric encryption. Drawback: a man in the middle can replace the server’s public key with one he generated himself and send it to the client, and the client has no way to verify whether the public key is trustworthy.
  • SSL: the server applies for a certificate from a certificate authority (CA); the certificate contains a signature and the server’s public key key3. When the client initiates a request, the server sends it the certificate; the client verifies the certificate, extracts the server’s public key key3, uses it to encrypt a symmetric key key1 it generates itself, and sends that to the server; from then on the two sides communicate encrypted with key1. This is secure because the certificate is issued and signed by the CA, so third-party forgery can be detected, and even if an attacker obtains the public key, it cannot decrypt the key1 that was encrypted with key3. (A minimal sketch of the hybrid scheme follows this list.)
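Below is a minimal Node.js sketch of the hybrid key exchange described above; it illustrates the idea only and is not the real TLS handshake. The message text and key sizes are made up.

```js
// Minimal sketch of hybrid encryption (illustration only, not the actual TLS handshake).
const crypto = require('crypto');

// The server owns an asymmetric key pair (key2 = private, key3 = public).
const { publicKey, privateKey } = crypto.generateKeyPairSync('rsa', { modulusLength: 2048 });

// The client generates a symmetric key (key1) and encrypts it with the server's public key (key3).
const key1 = crypto.randomBytes(32);
const encryptedKey1 = crypto.publicEncrypt(publicKey, key1);

// The server recovers key1 with its private key (key2); from now on both sides use key1.
const recoveredKey1 = crypto.privateDecrypt(privateKey, encryptedKey1);

// Symmetric encryption of the actual message with key1 (AES-256-GCM).
const iv = crypto.randomBytes(12);
const cipher = crypto.createCipheriv('aes-256-gcm', recoveredKey1, iv);
const ciphertext = Buffer.concat([cipher.update('hello over an encrypted channel', 'utf8'), cipher.final()]);
console.log(ciphertext.toString('base64'));
```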

HTTPS is the HTTP protocol with data encryption, identity authentication of the other party, and data-integrity protection layered on top via SSL or TLS (TLS can be regarded as the successor of SSL 3.0). Its features are as follows:

  • Content encryption: hybrid encryption is used, so a middleman cannot read the plaintext directly
  • Identity authentication: certificate verification lets the client confirm it is talking to the real server
  • Data integrity: prevents the transmitted content from being impersonated or tampered with by a middleman

The differences between HTTPS and HTTP are as follows:

  1. HTTPS requires applying for a certificate from a certificate authority (CA); free certificates are relatively rare, so there is usually a fee.
  2. HTTP runs directly over TCP and everything it transmits is plain text; HTTPS runs over SSL/TLS, which runs over TCP, and everything it transmits is encrypted.
  3. HTTP and HTTPS use completely different connection setups and different default ports (80 and 443).
  4. HTTP connections are simple and stateless; HTTPS, built from HTTP plus SSL/TLS, provides encrypted transmission and identity authentication, effectively prevents carrier hijacking, and is safer than HTTP.

Web Security Precautions

XSS attacks

XSS stands for cross-site scripting: an attack that exploits the fact that injected HTML/JavaScript will be executed by the page. There are two main flavors: injecting script through the page URL, or injecting script code into the database through input fields. Chrome defends against the first kind automatically (though you should still handle it yourself), while the second must be handled manually. A common approach is whitelist filtering with the xss library:

```js
const xss = require('xss')
let html = xss('<h1 id="title">XSS Demo</h1><script>alert("xss");</script>')
// -> <h1>XSS Demo</h1>&lt;script&gt;alert("xss");&lt;/script&gt;
console.log(html)
```

CSRF attacks

CSRF stands for cross-site request forgery. Suppose Juejin had a follow interface whose id parameter is the ID of the user to follow:

```
https://juejin.im?id=5cd0438c6fb9a031ec6d3ab2
```

All I need to do is put an img tag on one of my own pages:

```html
<img src="https://juejin.im?id=5cd0438c6fb9a031ec6d3ab2" />
```

Then, as long as a user who is already logged in to Juejin opens my page, they automatically follow me. Even if the interface were changed to POST, a hidden form submission on the page could achieve the same thing. CSRF attacks exploit the Web’s implicit authentication mechanism: the Web can verify that a request came from a certain user’s browser, but it cannot verify that the user actually intended to send it. CSRF is usually mitigated on the server side, following rules such as these (a minimal sketch follows the list):

  1. Do not use GET requests to modify data
  2. Set cookies with the HttpOnly attribute so scripts cannot read them
  3. Disallow cross-origin access to the interface
  4. Require verification information with the request, such as a captcha or a token
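Here is a minimal sketch of rules 2 and 4 above; the route names, cookie value, and port are made up.

```js
// Minimal sketch of rules 2 and 4: an HttpOnly cookie plus a token check
// (route names, cookie value, and port are made up).
const http = require('http');
const crypto = require('crypto');

const csrfToken = crypto.randomBytes(16).toString('hex'); // would be issued per session in a real app

http.createServer((req, res) => {
  if (req.method === 'GET' && req.url === '/login') {
    // HttpOnly: the cookie cannot be read by page scripts
    res.setHeader('Set-Cookie', 'sid=abc123; Path=/; HttpOnly');
    res.setHeader('Content-Type', 'application/json');
    res.end(JSON.stringify({ csrfToken })); // the page stores the token and sends it back later
  } else if (req.method === 'POST' && req.url === '/follow') {
    // Reject state-changing requests that do not carry the token the page obtained earlier
    if (req.headers['x-csrf-token'] !== csrfToken) {
      res.statusCode = 403;
      return res.end('CSRF check failed');
    }
    res.end('followed');
  } else {
    res.statusCode = 404;
    res.end();
  }
}).listen(3000);
```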

Clickjacking

Clickjacking is a visual-deception attack. The attacker embeds the target website in their own page via an iframe, makes the iframe transparent, and then tricks users into performing actions on that page; the users end up unknowingly clicking inside the invisible iframe.

Defense:

This one is usually left to the back-end folks: the fix is an HTTP response header, X-Frame-Options, which was designed specifically to counter clickjacking. It takes three possible values (a minimal sketch follows the list):

  1. DENY: the browser refuses to load the current page in any frame.
  2. SAMEORIGIN: the page may only be framed by pages under the same domain name.
  3. ALLOW-FROM origin: only the specified origin is allowed to frame the page.
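A minimal sketch of sending the header from a Node server (the port and page content are made up):

```js
// Minimal sketch of sending X-Frame-Options from a Node server (port and content are made up).
const http = require('http');

http.createServer((req, res) => {
  res.setHeader('X-Frame-Options', 'SAMEORIGIN'); // only same-origin pages may put this page in a frame
  res.setHeader('Content-Type', 'text/html');
  res.end('<h1>this page cannot be framed by other origins</h1>');
}).listen(3000);
```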

Man-in-the-middle attack

In a man-in-the-middle attack, the attacker establishes connections with both the server and the client and makes each side believe the connection is secure, while in fact the entire exchange is controlled by the attacker, who can not only read the traffic in both directions but also modify it. The crux of the man-in-the-middle problem is authentication and trust between client and server.

Symmetric, asymmetric, and hybrid encryption alone cannot effectively prevent man-in-the-middle attacks, because the middleman can intercept the first key exchange and quietly swap in his own key without either the client or the server noticing. HTTPS, as the ultimate defense, introduces the certificate mechanism to solve the trust problem between client and server and thereby effectively prevents man-in-the-middle attacks.

Summary

The above covers the HTTP knowledge a front-end developer needs to master. Owing to the limits of my ability, some points may not be explained very clearly; if anything is unclear, please feel free to point it out and correct me directly.

The front end is a huge subject: the entry requirements are not high, but its vast body of knowledge can feel overwhelming. As someone who switched to the front end midway, I plan to follow the guidance of the experts (thank you) and learn it again from the ground up, summarizing every knowledge point to deepen my own understanding and, hopefully, to be useful to others.

You are also advised to read another, more detailed article on the HTTP protocol, “HTTP protocol super detailed”, for a deeper treatment of HTTP.