HTTP overview
HTTP is a stateless protocol: the protocol itself does not save any communication state between requests and responses. At the HTTP level, nothing about previously sent requests or responses is persisted.
With HTTP, every new request gets a new, independent response. The protocol was designed this way to process a large number of transactions quickly and to keep the protocol scalable.
However, as applications evolved, the need to preserve state arose, so cookies were introduced (covered in the browser storage article).
With cookies layered on top of HTTP, state can be managed.
HTTP request methods
- HTTP/1.0 supports the GET, POST, and HEAD methods
- HTTP/1.1 adds the OPTIONS, PUT, DELETE, TRACE, and CONNECT methods
An overview of all types of requests
- GET: usually used to ask the server to send a certain resource
- HEAD: requests only the headers for a resource; these are identical to the headers a GET would return. One use case is obtaining the size of a large file before deciding whether to download it, saving bandwidth
- OPTIONS: used to get the communication options supported by the target resource
- POST: sends data to the server
- PUT: adds a resource, or replaces the representation of the target resource with the request payload
- DELETE: deletes a specified resource
- PATCH: used to partially modify a resource
- CONNECT: reserved in HTTP/1.1 for proxy servers that can tunnel connections. When communicating through a proxy, a tunnel is established and TCP communication runs through it; SSL/TLS is typically used to encrypt the content carried over the tunnel
- TRACE: asks the web server to loop the request back to the client so the path it traveled can be traced. The request carries a Max-Forwards header whose value is decremented by one at each server it passes through; when it reaches zero, forwarding stops and the server holding the request returns a 200 response
A note on TRACE
The client can send a TRACE request to see how its request was processed and modified along the way. A request headed for the origin server may be routed through proxies, and TRACE confirms what happened during that journey.
However, TRACE is rarely used in practice and is vulnerable to XST (cross-site tracing) attacks, so it is usually disabled.
GET and POST
- Navigating back in the browser is harmless for GET, while POST resubmits the request
- URLs produced by GET can be bookmarked; POST URLs cannot
- GET responses are actively cached by the browser; POST responses are not
- GET requests can only be URL-encoded, while POST supports multiple encodings
- GET request parameters are retained in full in the browser history; POST parameters are not
- GET passes parameters in the URL, which has length limits; POST does not
- GET accepts only ASCII characters in its parameters; POST has no such restriction
- GET is less secure than POST because its parameters are exposed directly in the URL, so it should not be used to pass sensitive information
- GET arguments are passed through the URL, while POST data is placed in the request body
Essentially, though, GET and POST are just two ways of sending a request in the HTTP protocol. HTTP sits on top of TCP/IP, so at the protocol level both kinds of request are fundamentally the same.
We can attach a request body to GET, and we can put URL parameters on POST; both are technically possible.
So GET and POST are really conventions. Browsers and servers follow these conventions, and if they do not, the usability of the service is not guaranteed.
In addition, there is another observable difference between GET and POST:
GET generates one TCP packet, while POST generates two.
- For a GET request, the browser sends the HTTP headers and data together, and the server responds with 200 (OK).
- For a POST request, the browser first sends the headers, the server responds with 100 Continue, the browser then sends the data, and the server responds with 200 OK.
From this point of view, GET appears more efficient than POST, but don't blindly replace POST with GET. Here's why:
- GET and POST each have a clear meaning and should not be used interchangeably
- On a good network, the difference between sending one packet and sending two is negligible, while on a poor network, sending two packets gives TCP a real advantage in verifying packet integrity
- Not all browsers send the POST in two packets; Firefox, for example, sends it once
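To make the two request shapes concrete, here is a minimal sketch using Python's standard library; the httpbin.org URLs are just placeholders for any echo server.

```python
from urllib import request, parse

# GET: parameters are encoded into the URL's query string
params = parse.urlencode({"q": "http", "page": 1})
with request.urlopen(f"https://httpbin.org/get?{params}") as resp:
    print(resp.status)  # 200

# POST: parameters travel in the request body, not the URL
data = parse.urlencode({"user": "alice"}).encode("utf-8")
req = request.Request("https://httpbin.org/post", data=data, method="POST")
req.add_header("Content-Type", "application/x-www-form-urlencoded")
with request.urlopen(req) as resp:
    print(resp.status)  # 200
```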
PUT and POST
The PUT method is idempotent, while POST is not.
In addition, the URI of a PUT request generally points to a specific, single resource, whereas POST points to a resource collection.
POST creates a resource; PUT updates (or replaces) a resource.
Idempotent: a request is idempotent if issuing it multiple times leaves the server in the same state as issuing it once.
Difference between PUT and PATCH
Both PUT and PATCH update resources, but PATCH performs a partial update of a known resource.
Simply put: PUT replaces all of a resource's information, while PATCH updates only part of it.
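A sketch of the difference in practice, assuming a hypothetical /users/42 resource on a made-up API (the host and fields are illustrative only):

```python
import json
from urllib import request

BASE = "https://api.example.com"  # hypothetical REST API

# PUT: send the complete representation; fields omitted are replaced/reset
body = json.dumps({"name": "Alice", "email": "alice@example.com", "age": 30})
put_req = request.Request(f"{BASE}/users/42", data=body.encode(), method="PUT",
                          headers={"Content-Type": "application/json"})

# PATCH: send only the fields being changed
patch = json.dumps({"email": "alice@new.example.com"})
patch_req = request.Request(f"{BASE}/users/42", data=patch.encode(), method="PATCH",
                            headers={"Content-Type": "application/json"})
# (requests are only constructed here, since the host is hypothetical)
```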
HTTP Keep-Alive
In the early days of HTTP/1.0, a new connection was created for each HTTP request, and creating a connection costs resources and time. To reduce resource consumption and shorten response times, connections needed to be reused. Late HTTP/1.0 and HTTP/1.1 introduced connection reuse: adding Connection: keep-alive to the HTTP request header tells the other side not to close the connection after the response completes, so the next exchange can use the same connection. The protocol states that under HTTP/1.0, Connection: keep-alive must be present in the request header if a persistent connection is desired.
Advantages of Keep-Alive:
- Less CPU and memory usage (due to fewer connections open at the same time)
- Allows HTTP pipelining of requests and replies
- Reduced congestion control (TCP connections reduced)
- Reduced latency for subsequent requests (no handshake required)
- Errors can be reported without closing the TCP connection
Keep-alive is enabled by default in HTTP/1.1, visible in the response headers as Connection: keep-alive.
When this header is configured, you can configure some parameters of the keep-alive header, such as:
```
Keep-Alive: timeout=5, max=100
```
Here timeout indicates that the idle TCP channel may stay open for 5 seconds, and max indicates that the connection will be closed after serving 100 requests. By default there is no limit.
There are two corresponding configuration directives in nginx:
```nginx
keepalive_timeout 65;     # how long an idle keep-alive connection stays open (seconds)
keepalive_requests 100;   # maximum number of requests served over one connection
```
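From the client side, connection reuse looks like this minimal sketch with Python's http.client (example.com is a placeholder); one connection object means one reused TCP connection:

```python
import http.client

# One HTTPSConnection object = one TCP connection reused across requests
conn = http.client.HTTPSConnection("example.com")

for path in ("/", "/about"):
    conn.request("GET", path)   # sent over the same TCP connection
    resp = conn.getresponse()
    resp.read()                 # drain the body before reusing the connection
    print(path, resp.status, resp.getheader("Connection"))

conn.close()
```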
Different from TCP’s Keep Alive
HTTP keep-alive and TCP keepalive differ in intent. The former reuses TCP connections to avoid establishing too many of them, while TCP keepalive aims to check that a TCP connection is still alive, essentially by sending heartbeat packets: if an ACK is received, the connection is considered alive; if no response arrives after a certain number of retries, the connection is dropped.
Pipelining
Persistent connections make pipelining possible. Previously, after sending a request you had to wait for and receive its response before sending the next one. With pipelining, the next request can be sent without waiting for the current response.
This allows multiple requests to be sent in parallel, rather than waiting for responses one after another.
Essentially, pipelining is a technique for submitting multiple HTTP requests in a batch without waiting for the server's responses. It requires a persistent connection, only GET and HEAD requests should be pipelined (POST is restricted), and it is only available in HTTP/1.1.
For pages that issue many HTTP requests, pipelining can shorten load times. The key point is that multiple request messages can be packed into the same TCP segment, so several requests go out in one packet, reducing redundant packets on the network and lowering line load.
HTTP request packet
The request packet consists of four parts:
- The request line
- Request header
- A blank line
- Request body
The request line
The request line includes the request method, URL, and HTTP protocol version, separated by spaces. For example: GET /index.html HTTP/1.1
Request header
Each line of the request headers is a key-value pair, with the key and value separated by a colon (:).
- User-Agent: the type of client (browser) that generated the request
- Accept: a list of content types the client can process
- Host: the hostname of the request; this allows multiple domain names to share one IP address (virtual hosting)
Request body
Data carried in requests such as POST and PUT.
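To see the four parts on the wire, here is a sketch that assembles a raw request message by hand and sends it over a plain socket (host and path are placeholders):

```python
import socket

host = "example.com"

# Request line + headers + blank line + (empty) body
message = (
    "GET /index.html HTTP/1.1\r\n"   # request line
    f"Host: {host}\r\n"              # request headers
    "Connection: close\r\n"
    "\r\n"                           # blank line terminates the headers
)                                    # no body for this GET

with socket.create_connection((host, 80)) as sock:
    sock.sendall(message.encode("ascii"))
    print(sock.recv(4096).decode("ascii", errors="replace"))
```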
HTTP response packet
The response packet also consists of four parts:
- Response line: consists of the protocol version, status code, and reason phrase, e.g. HTTP/1.1 200 OK
- Response headers: key-value pairs that supplement the response
- A blank line
- Response body: data that the server responds to
HTTP encoding
HTTP can transmit data as-is, but it can also encode the data during transmission to improve the transfer rate. Encoding at transmission time lets the server handle large volumes of requests efficiently, at the cost of extra CPU work for encoding and decoding.
Message body and entity body
- Message: the basic unit of HTTP communication; it consists of a sequence of octets (an octet is 8 bits) and is transmitted over HTTP
- Entity: the payload transmitted as a request or response, consisting of the entity headers and the entity body
The HTTP message body carries the entity body of the request or response.
Normally the message body equals the entity body; only when an encoding is applied during transmission does the entity body change and differ from the message body.
Content encoding
The HTTP protocol has a feature called content encoding that reduces the size of transmitted data.
Content encoding specifies an encoding to apply to the entity content while keeping the entity information itself intact; the content is compressed as-is. The client receives the encoded entity and decodes it.
Commonly used content encodings include:
- gzip (GNU zip)
- compress (standard UNIX compression)
- deflate (zlib)
- identity (no encoding)
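A sketch of a client advertising and decoding gzip content encoding; the URL is a placeholder, and real HTTP libraries usually handle this automatically:

```python
import gzip
from urllib import request

req = request.Request("https://example.com/",
                      headers={"Accept-Encoding": "gzip"})
with request.urlopen(req) as resp:
    body = resp.read()
    # Decode only if the server actually applied gzip content encoding
    if resp.headers.get("Content-Encoding") == "gzip":
        body = gzip.decompress(body)
print(len(body), "bytes after decoding")
```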
Chunked transfer encoding
During HTTP communication, the browser cannot render the requested page until the entire encoded entity has been transferred. When transferring large amounts of data, splitting the data into multiple chunks lets the browser render the page progressively.
This ability to split the entity body into chunks is called chunked transfer coding.
Chunked transfer coding divides the entity body into several chunks, each prefixed with its size in hexadecimal, and marks the end of the entity body with a chunk of size "0 (CR+LF)".
An entity body sent with chunked transfer coding is decoded by the receiving client, restoring the entity body as it was before encoding.
HTTP/1.1 has a mechanism called transfer coding, which allows the message to be transmitted in a given encoding; however, it is only defined for chunked transfer coding.
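A minimal sketch of the chunked wire format, encoding a body by hand:

```python
def chunked_encode(body: bytes, chunk_size: int = 16) -> bytes:
    """Encode a body using HTTP/1.1 chunked transfer coding."""
    out = b""
    for i in range(0, len(body), chunk_size):
        chunk = body[i:i + chunk_size]
        # Each chunk: size in hex, CRLF, data, CRLF
        out += f"{len(chunk):x}\r\n".encode() + chunk + b"\r\n"
    return out + b"0\r\n\r\n"  # zero-size chunk terminates the body

print(chunked_encode(b"Hello, chunked transfer coding!"))
```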
A collection of multipart objects
When sending an email, we can write text and attach multiple files. This is thanks to MIME (Multipurpose Internet Mail Extensions), which lets mail handle multiple types of data such as text, images, and video.
HTTP uses a similar idea. The main multipart types include:
- multipart/form-data: used when uploading form data from a web page
- multipart/byteranges: used when a 206 (Partial Content) response contains multiple ranges of content
Which multipart type is in use is specified with the Content-Type header field.
The parts of a multipart object collection are divided by the boundary string specified there: each part begins with the boundary string prefixed by "--", and the final boundary is additionally suffixed with "--" to mark the end of the collection.
Each part of a multipart object collection can contain its own header fields, and a part can itself nest another multipart object collection. See RFC 2046 for a more detailed treatment.
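A sketch of what a multipart/form-data body looks like on the wire; the boundary string is arbitrary, chosen so it never appears in the payload:

```python
boundary = "X-BOUNDARY-1234"  # arbitrary; must not occur in the payload

body = (
    f"--{boundary}\r\n"
    'Content-Disposition: form-data; name="username"\r\n'
    "\r\n"
    "alice\r\n"
    f"--{boundary}\r\n"
    'Content-Disposition: form-data; name="avatar"; filename="a.png"\r\n'
    "Content-Type: image/png\r\n"
    "\r\n"
    "<binary data here>\r\n"
    f"--{boundary}--\r\n"   # closing boundary carries a trailing --
)
headers = {"Content-Type": f"multipart/form-data; boundary={boundary}"}
print(body)
```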
Range requests
A request for only a specified range of a resource is called a range request.
For example, for a 10,000-byte resource, a range request lets you ask for just bytes 5001-10000.
```
<!-- bytes 5001-10000 -->
Range: bytes=5001-10000

<!-- from byte 5001 to the end -->
Range: bytes=5001-

<!-- the final 3000 bytes, plus bytes 5000-7000 -->
Range: bytes=-3000, 5000-7000
```
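A sketch of issuing a range request from Python; a 206 response means the (placeholder) server honored the range:

```python
from urllib import request

req = request.Request("https://example.com/big-file.bin",
                      headers={"Range": "bytes=0-1023"})  # first 1 KB
with request.urlopen(req) as resp:
    print(resp.status)                        # 206 if ranges are supported
    print(resp.headers.get("Content-Range"))  # e.g. bytes 0-1023/10000
    data = resp.read()
```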
Content negotiation returns the most appropriate content
If the browser's default language is English or Chinese, accessing the same URL returns the English or Chinese version of the page. This mechanism is called content negotiation.
Content negotiation means the client and server negotiate over the content of the response so that the most suitable resource is served to the client. Negotiation is based on the language, character set, and encoding of the requested resource.
It mainly relies on these header fields:
- Accept
- Accept-Charset
- Accept-Encoding
- Accept-Language
- Content-Language
There are three types of content negotiation techniques:
- Server-driven Negotiation
- Client-driven Negotiation
- Transparent Negotiation
HTTP response status code
2XX Success:
- 200 OK: the client's request was processed successfully on the server ✨
- 201 Created: the request was fulfilled and a new resource was created as a result
- 202 Accepted: the request was accepted but has not been acted on yet; completion is not guaranteed
- 204 No Content: the request succeeded, but the response contains no entity body
- 206 Partial Content: used for range requests ✨
3XX Redirection:
- 301 Moved Permanently: permanent redirect; the resource has been assigned a new URL
- 302 Found: temporary redirect; the resource has temporarily been assigned a new URL ✨
- 303 See Other: the resource exists at another URL, which should be fetched with the GET method (changes the request method)
- 304 Not Modified: the server allows access to the resource, but the request conditions were not met (used with conditional requests)
- 307 Temporary Redirect: same meaning as 302, but the request method must not change
4XX Client errors:
- 400 Bad Request: the request message contains a syntax error ✨
- 401 Unauthorized: the request requires authentication information (via HTTP authentication) ✨
- 403 Forbidden: access to the requested resource was denied by the server ✨
- 404 Not Found: the requested resource was not found on the server ✨
- 408 Request Timeout: the client request timed out
- 409 Conflict: the request would cause a conflict on the resource
5XX Server errors:
- 500 Internal Server Error: an error occurred while the server was executing the request ✨
- 501 Not Implemented: the request is beyond the server's capability, e.g. it uses a method or feature the server does not support
- 503 Service Unavailable: the server is temporarily overloaded or down for maintenance and cannot handle the request
- 505 HTTP Version Not Supported: the server does not support (or refuses to support) the HTTP version used in the request
307, 303, 302
302 is an HTTP/1.0 status code. HTTP/1.1 added 303 and 307 to refine its semantics.
303 explicitly tells the client to fetch the resource with the GET method, so a redirected POST becomes a GET. 307 follows the standard strictly and does not change POST to GET.
302 and 301 impact on search engines
302 and 301 are both redirect status codes. The difference is that 301 is a permanent redirect and 302 is a temporary redirect.
A 301 redirect is the most search-engine-friendly way to handle a page that has changed address; unless the move is temporary, 301 is the recommended redirect. With a 301, the search engine carries the old address's PageRank over to the new address and drops the old address from its index entirely, clearing the old URL's ranking.
302 stands for "temporarily moved". Black-hat SEOs abused 302 redirects heavily in past years, and the major search engines have since cracked down: Google once punished Business.com and, more recently, BMW's German site. Even a site that is not objectively spam can be misjudged as spam by search engines and penalized.
:::tip Spam
Spam here means search engine spamming: using unethical techniques to improve search engine rankings. It may cause a search engine to permanently delete your site from its database.
Common search engine spamming techniques:
- Hidden text
- Repeated keywords
- Irrelevant keywords
- Hidden tags
- Identical or near-identical pages
- Page-swapping (bait-and-switch) techniques
- Pages with no content
- Excessive submission
- Link spam
:::
HTTP headers
HTTP messages are divided into request messages and response messages.
A request message consists of the method, URI, HTTP version, and HTTP header fields.
A response message consists of the HTTP version, status code, and HTTP header fields.
Of these, the header fields are the richest part.
HTTP header fields fall into four categories according to their actual use:
- General header fields: header fields used by both request and response messages
- Request header fields: used when sending a request message to the server; they supply additional information about the request, client information, and priorities for the response content
- Response header fields: used when returning a response message to the client; they supplement the response with additional information and may ask the client for additional information
- Entity header fields: used for the entity part of request and response messages; they supplement the entity with information such as when the resource content was last updated
General Header Fields
Headers used by both request and response packets:
- Cache-Control: controls caching behavior ✨
- Connection: connection management; a hop-by-hop header ✨
- Upgrade: used to check whether another (often higher-version) protocol is available and switch to it
- Via: proxy server information; traces the transmission path of request and response messages between client and server; also helps avoid request loops, often used together with TRACE requests
- Warning: warns the user about cache-related issues, in the format: Warning: [warning code] [warning host:port] "[warning text]" ([date and time])
- Transfer-Encoding: the transfer coding applied to the message body ✨
- Trailer: records which header fields appear after the message body; usable with chunked transfer coding
- Pragma: a legacy field kept only for backward compatibility
- Date: the date and time the message was created
Cache-Control: controls caching
The directives of this field are optional; multiple directives are separated by commas. The Cache-Control header field can be used in both requests and responses.
For details, see browser/cache policy
Connection: connection management, hop-by-hop header
The header has two functions:
- Controlling header fields that should no longer be forwarded to proxies:
Connection: <name of the header field not to forward>
- Managing persistent connections:
Connection: close
HTTP/1.1 connections are persistent by default: as long as the TCP connection remains open, HTTP requests can keep being sent over it. HTTP/1.0 had different connection restrictions and rules, which we won't expand on here. A connection can be explicitly closed with Connection: close. Under HTTP/1.0, the Keep-Alive header takes effect only when Connection: Keep-Alive is also set.
Request header field
Headers used by the client when sending a request message to the server:
- Accept: media types the client or proxy can process ✨
- Accept-Encoding: content encodings the client prefers
- Accept-Language: natural languages the client prefers
- Accept-Charset: character sets the client prefers
- Authorization: web authentication credentials ✨
- Host: host of the requested resource; the only header field the HTTP/1.1 specification requires in every request
- Expect: expects a specific behavior from the server
- If-Match: compares the entity tag (ETag); the server accepts the request only if the values match
- If-None-Match: compares the entity tag (ETag); the opposite of If-Match ✨
- If-Modified-Since: compares the resource's update time (Last-Modified) ✨
- If-Unmodified-Since: compares the resource's update time; the opposite of If-Modified-Since ✨
- If-Range: if the given field value (an ETag or a time) matches the requested resource, the request is processed as a range request; otherwise the whole resource is returned
- Range: the byte range of the entity being requested ✨
- Proxy-Authorization: authentication credentials for a proxy server
- From: the user's email address
- User-Agent: client program information ✨
- Max-Forwards: maximum hop count, decremented by one at each proxy; can be used to diagnose the request path
- TE: transfer encodings the client prefers
- Referer: the URL of the page from which the request originated, revealing which page the URI came from
Response header field
Headers used in responses returned from the server to the client:
- Accept-Ranges: whether byte-range requests are accepted; Accept-Ranges: none is sent when range requests cannot be processed
- Age: how long ago the origin server created the response, in seconds
- ETag: a string that uniquely identifies the resource's representation ✨
- Location: the URI the client should be redirected to; essentially always used together with 3XX redirect responses ✨
- Proxy-Authenticate: the proxy server requires authentication information from the client
- Retry-After: used with status code 503; tells the client when to retry the request
- Server: server information ✨
- Vary: cache control information for proxy servers; e.g. Vary: Accept-Language means the cached response may only be returned for requests in the same natural language
- WWW-Authenticate: the server requires authentication information from the client
Entity header fields
Headers used for the entity part of request and response messages:
- Allow: the HTTP request methods the resource supports ✨
- Content-Encoding: the encoding applied to the entity: one of gzip, compress, deflate, identity
- Content-Language: the natural language of the entity
- Content-Location: the URI corresponding to the resource returned in the message body; unlike Location, it does not trigger a redirect
- Content-Length: the size of the entity body in bytes
- Content-Type: the media type of the entity body
- Content-MD5: a digest of the entity body; note it cannot verify whether the content has been tampered with
- Content-Range: for range requests, tells the client which part of the entity the response covers
- Last-Modified: when the resource was last modified ✨
- Expires: when the entity body expires ✨
Non-HTTP/1.1 header fields
Beyond the fields above, headers defined in other RFCs, such as Cookie, Set-Cookie, and Content-Disposition, are also used very frequently.
End-to-end / Hop-by-hop
HTTP header fields are divided into two types according to how caching and non-caching proxies should handle them:
- End-to-end headers: headers in this category are forwarded to the final recipient of the request or response, must be stored in cache-generated responses, and must always be forwarded.
- Hop-by-hop headers: headers in this category are valid only for a single hop and are not forwarded by caches or proxies. In HTTP/1.1 and later, a hop-by-hop header must be listed in the Connection header to be used.
Hop-by-hop header fields are:
- Connection
- Keep-Alive
- Proxy-Authenticate
- Proxy-Authorization
- Trailer
- TE
- Transfer-Encoding
- Upgrade
Cookie-related header fields
- Set-Cookie: cookie information used to begin state management; a response header field
- Cookie: cookie information the client sends to the server
See the browser storage article for more on this topic.
Other header fields
- X-Frame-Options: an HTTP response header that controls whether the site's content may be rendered inside another site's frame tags; its main purpose is to prevent clickjacking attacks. Two values can be specified:
- DENY: deny framing entirely
- SAMEORIGIN: allow framing only by pages under the same origin
- X-XSS-Protection: a response header for cross-site scripting protection that enables or disables the browser's XSS filter
- 0: disables the XSS filter
- 1: enables the XSS filter
- DNT: Do Not Track; expresses the user's refusal to be tracked by targeted advertising
- 0: consents to being tracked
- 1: refuses to be tracked
- P3P (The Platform for Privacy Preferences): a technology that turns a website's privacy policy into a machine-readable form, with the goal of protecting user privacy.
To set up P3P:
- Create a P3P privacy policy
- Create a P3P policy reference file and save it at /w3c/p3p.xml
- Generate compact policies from the P3P privacy policy and output them in the HTTP response headers
HTTP2
Compared with HTTP/1.x, HTTP/2 significantly improves web performance, further reducing network latency while remaining fully compatible with HTTP/1.1. For front-end developers, it undoubtedly reduces the amount of optimization work needed.
HTTP/2: The Future of the Internet
The specifications are here: RFC 7540 (HTTP/2) and RFC 7541 (HPACK)
multiplexing
It is well known that under HTTP/1.1, browsers limit the number of concurrent requests to the same domain; requests beyond the limit are blocked.
HTTP/2's multiplexing allows multiple request-response exchanges to be carried over a single HTTP/2 connection.
HTTP/2 shrinks the basic unit of HTTP communication to the frame; frames belong to logical streams, and messages are exchanged bidirectionally and in parallel over the same TCP connection.
Binary framing
HTTP/2 adds a binary framing layer between the application layer (HTTP/2) and the transport layer (TCP/UDP) while staying compatible with HTTP/1.x.
In this binary framing layer, HTTP/2 splits everything transmitted into smaller messages and frames encoded in binary format: the HTTP/1.x headers are encapsulated in HEADERS frames, and the request body is wrapped in DATA frames.
All HTTP/2 traffic runs over a single connection, which can carry any number of bidirectional data streams.
Under HTTP/1.x, the key to HTTP performance is not high bandwidth but low latency. A TCP connection tunes itself over time: it initially limits its maximum throughput and increases speed as transfers succeed. This tuning is called TCP slow start.
By letting all data streams share one connection, HTTP/2 uses the TCP connection far more efficiently and lets high bandwidth actually translate into HTTP performance gains.
In a word:
- The single-connection, multi-resource model reduces pressure on the server: fewer connections to manage, less memory used, and higher connection throughput
- With fewer TCP connections, network congestion improves, and less time spent in slow start makes recovery from congestion and packet loss faster
Request priority
By splitting HTTP messages into many individual frames, the interleaving and transmission order of those frames can be optimized. Each stream can carry a 31-bit priority value: 0 is the highest priority, and 2^31 - 1 is the lowest.
Weight: the server can use stream priorities to control resource allocation (CPU, memory, bandwidth) and, once response data is ready, send the highest-priority frames to the client first. But not strictly always, since that would reintroduce head-of-line blocking: a slow high-priority request blocking delivery of everything else.
A mix of priorities is also needed when allocating processing resources and bandwidth between client and server; the client can indicate which streams matter most.
Stream dependencies: each stream can explicitly depend on another stream, and the client can combine weights and dependencies to build a "priority tree" that it passes to the server, indicating how it prefers to receive responses, i.e. higher-priority requests should be answered faster.
Browsers apply default priorities based on their own view of each resource's importance:
- Highest priority: HTML
- High priority: CSS
- Normal priority: JS
- Low priority: other resources, such as images
Header compression
HTTP/1.x does not support header compression. SPDY addressed this with DEFLATE; HTTP/2 uses the purpose-built HPACK algorithm, reducing header size. Both endpoints maintain an index table that records headers already seen, so later transmissions only need to carry the keys of recorded headers, and the receiver looks up the corresponding values by key.
Why compress
In HTTP/1, a request or response consists of a status line, request/response headers, and a message body. The message body is usually gzip-compressed, or is itself a compressed binary format (such as an image or video), but the status line and headers are not compressed at all: they travel as plain text.
As Web functionality becomes more complex, the number of requests per page increases. According to HTTP Archive, the average page currently generates hundreds of requests. More and more requests result in more and more traffic being consumed in the header.
In the HTTP/1 era, there are many optimizations that can be tried to reduce the amount of traffic consumed by headers, such as merging requests, enabling cookie-free domains, and so on, but these are somewhat problematic.
How it works
In plain terms, header compression requires browsers and servers that support HTTP/2 to:
- Maintain an identical static table containing common header names as well as particularly common name-value combinations
- Maintain an identical dynamic table, to which content can be added on the fly
- Support Huffman coding based on a static Huffman code table
The static dictionary serves two purposes:
- A fully matching name-value pair, such as :method: GET, can be represented by a single character
- A pair whose header name matches, such as cookie: xxxx, can have its name represented by a single character
The static dictionary in HTTP/2 looks like this (httpwg.org/specs/rfc75…):
Index | Header Name | Header Value |
---|---|---|
1 | :authority | |
2 | :method | GET |
3 | :method | POST |
4 | :path | / |
5 | :path | /index.html |
6 | :scheme | http |
7 | :scheme | https |
8 | :status | 200 |
… | … | … |
32 | cookie | |
… | … | … |
60 | via | |
61 | www-authenticate | |
Meanwhile, the browser can tell the server to add cookie: xxx to the dynamic dictionary, after which the entire key-value pair can be represented by a single field. The server can likewise update the client's dynamic dictionary. Note that dynamic dictionaries are context-dependent: one must be maintained per HTTP/2 connection.
Dictionaries greatly improve compression, and the static dictionary works even on the very first request. For content found in neither the static nor the dynamic dictionary, Huffman coding reduces the size further; HTTP/2 uses a static Huffman code table, which also has to be built into both client and server.
In addition, the HTTP/1 status line information (method, path, status, etc.) is split into key-value pairs in HTTP/2 and placed among the headers as pseudo-headers (the ones beginning with a colon), which enjoy the same dictionary and Huffman compression. Also, all header names must be lowercase in HTTP/2.
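To make the indexing idea concrete, here is a deliberately simplified sketch of dictionary-based header compression; real HPACK adds integer prefix encoding and Huffman coding on top (RFC 7541), so treat this only as an illustration:

```python
STATIC_TABLE = {(":method", "GET"): 2, (":path", "/"): 4, (":scheme", "https"): 7}

def compress(headers, dynamic_table):
    """Replace known header pairs with small integer indexes (simplified HPACK idea)."""
    out = []
    for pair in headers:
        if pair in STATIC_TABLE:
            out.append(STATIC_TABLE[pair])   # one small int stands for the whole pair
        elif pair in dynamic_table:
            out.append(dynamic_table[pair])
        else:
            out.append(pair)                 # send literally...
            dynamic_table[pair] = 62 + len(dynamic_table)  # ...and index it for next time
    return out

dyn = {}
print(compress([(":method", "GET"), ("cookie", "id=42")], dyn))  # cookie sent literally
print(compress([(":method", "GET"), ("cookie", "id=42")], dyn))  # now just indexes
```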
Server push
In HTTP 2.0, a server can actively push other resources in response to a request from a client.
It is easy to predict that the client will need certain resources. Server push lets the server send those necessary resources to the client ahead of time, reducing latency. (Where browser support allows, prefetch can achieve something similar from the client side.)
Server-side push has another big advantage: it can be cached. This means that cache resources can be shared between different pages under the same origin policy.
Note two properties of server push:
- Push follows the same-origin policy
- Server push is triggered based on the client's request and response
When a server wants to push a resource, it sends a frame of type PUSH_PROMISE containing the stream ID the push will use, telling the client: "I'm about to send you something on this stream ID." When the client parses a PUSH_PROMISE frame, it prepares to accept the stream the server is pushing.
Performance bottleneck
Enabling HTTP/2 brings significant performance gains but also introduces a new bottleneck: with all traffic concentrated on one TCP connection, TCP itself can become the limiting factor, for example through TCP-level head-of-line blocking. The loss of a single TCP packet blocks the entire connection until it is retransmitted, and every stream on the connection stalls in the meantime.
Upgrading nginx to HTTP/2
- nginx version 1.9.5 or later
- nginx built with the `--with-http_ssl_module` and `--with-http_v2_module` modules
- HTTPS configured (a prerequisite for enabling HTTP/2)
- the listen directive set to `listen 443 ssl http2`
- restart nginx (note: a full restart, not just `nginx -s reload`)
HTTP3
HTTP/3 builds on QUIC: it uses UDP for high speed while preserving the stability of the connection and without sacrificing TLS security.
Overview of the QUIC protocol
QUIC (Quick UDP Internet Connections) is a UDP-based protocol that leverages UDP's speed and efficiency while integrating the strengths of TCP, TLS, and HTTP/2.
QUIC is a transport-layer protocol that replaces TCP and SSL/TLS, with the application layer sitting directly on top of it. The familiar application-layer protocols, such as HTTP, FTP, and IMAP, could in theory all run over QUIC. HTTP running over QUIC is what we call HTTP/3: HTTP over QUIC.
So there is no understanding HTTP/3 without QUIC. Here are several of QUIC's important features.
0-RTT connection establishment
RTT: round-trip time; it counts only the network round trip, not processing time.
Establishing an HTTP/2 connection takes 3 RTTs, or 2 RTTs with session reuse (i.e. caching the symmetric key computed during the first handshake). If TLS is upgraded to 1.3, an HTTP/2 connection takes 2 RTTs, or 1 RTT with session reuse. HTTP/2 could shave this down if it did not require HTTPS, but virtually all browsers are designed to require HTTP/2 over HTTPS.
HTTP/3 needs only 1 RTT for the first connection and 0 RTT afterwards, meaning the client's very first packet to the server can already carry request data. The main connection flow is as follows:
- On the first connection, the client sends an Inchoate Client Hello to request a connection.
- The server generates g, p, and a, computes A from g, p, and a, puts g, p, and A into the Server Config, and then sends a Rejection message to the client.
- On receiving g, p, and A, the client generates b, computes B from g, p, and b, and computes the initial key K from A, p, and b. It then encrypts the HTTP data with K and sends it, together with B, to the server.
- On receiving B, the server derives the same key from a, p, and B and uses it to decrypt the HTTP data. For forward secrecy, the server then refreshes its random number a and public key, generates a new key S, and sends the new public key to the client in a Server Hello message along with the HTTP response data.
This uses the Diffie-Hellman (DH) key exchange. Its core idea: the server generates three numbers a, g, and p; a is kept private, while g and p are sent to the client; the client generates its own secret b. The secrets a and b never travel over the network, which greatly improves security. Since p is a very large number, even an eavesdropper who captures the public values g, p, A, and B in transit cannot recover the key with modern computing power.
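In standard Diffie-Hellman notation (which the exchange above instantiates), both sides derive the same key K even though a and b never cross the network:

$$
A = g^a \bmod p, \qquad B = g^b \bmod p
$$

$$
K = B^a \bmod p = A^b \bmod p = g^{ab} \bmod p
$$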
Connection migration
A TCP connection is identified by a four-tuple (source IP, source port, destination IP, destination port). When you switch networks, at least one of these factors changes, so the connection changes. If the original TCP connection were kept, it would fail, and a new connection would have to be established after a timeout; this is why, after switching to a new network, content can take a long time to load even though the network is fine. Even a well-implemented client that establishes a new TCP connection as soon as it detects a network change still pays hundreds of milliseconds for the new connection.
QUIC connections are not bound to the four-tuple: when one of the four elements changes, the original connection survives. The principle:
QUIC identifies a connection not by the four-tuple but by a 64-bit random number called the Connection ID. As long as the Connection ID stays the same, the connection is maintained even if the IP address or port changes.
Head-of-line blocking / multiplexing
Both HTTP/1.1 and HTTP/2 suffer from head-of-line (HOL) blocking.
TCP is a connection-oriented protocol: after data is sent, an ACK confirms the peer received it. Waiting for the previous request's ACK before sending the next would be inefficient, so HTTP/1.1 later introduced pipelining, allowing a single TCP connection to have multiple requests in flight and improving transmission efficiency.
Head-of-line blocking arises against this background. Suppose a TCP connection carries 10 simultaneous requests, and requests 1-3 reach the client but request 4's packets are lost: requests 5-10 are then blocked until request 4's data is retransmitted and processed, wasting bandwidth resources.
That is why HTTP generally allows up to six TCP connections per host, which uses bandwidth better; yet head-of-line blocking still exists within each connection.
HTTP/2's multiplexing solves the request-level head-of-line blocking above. In HTTP/2, each request is split into multiple frames that travel concurrently over one TCP connection, so one blocked request no longer stalls the others.
However, while HTTP/2 removes blocking at the granularity of requests, its underlying TCP still has head-of-line blocking of its own. Each HTTP/2 request is split into frames belonging to streams, the logical transmission units, which is how HTTP/2 sends multiple requests over one connection. Suppose Stream 1 has been delivered correctly but the third frame of Stream 2 is lost: TCP processes data in strict order, so the earlier bytes must be handled first, and the sender must retransmit the third frame before anything after it can be processed.
Worse, since HTTP/2 effectively requires HTTPS, the TLS layer adds its own head-of-line blocking. TLS organizes data into records, encrypting a batch of data together before splitting it across multiple TCP packets. A record is generally 16 KB, spanning roughly 12 TCP packets; if any of those packets is lost, the entire record cannot be decrypted.
Head-of-line blocking is why HTTP/2 can be slower than HTTP/1.1 in weak network environments where packet loss is likely.
How does QUIC solve head-of-line blocking? Two main points:
- QUIC's unit of transmission and of encryption is the packet; encryption, transmission, and decryption are all packet-based, avoiding TLS's record-level blocking.
- QUIC runs on UDP, and UDP imposes no ordering at the receiver: even if one packet is lost, the connection as a whole is not blocked, and other resources are processed normally.
Congestion control
The purpose of congestion control is to prevent too much data from flooding the network at once and exceeding its maximum load. QUIC's congestion control is similar to TCP's, with improvements. First, a look at TCP's congestion control:
- Slow start: the sender sends 1 unit of data, then 2, then 4, then 8, growing exponentially while probing the network's congestion level
- Congestion avoidance: after a threshold is reached, exponential growth becomes linear
- Fast retransmit: when the sender receives three duplicate ACKs, it treats the following segment as lost and retransmits it immediately rather than waiting for the retransmission timer to expire
- Fast recovery: together with fast retransmit, the sender starts a timeout timer when retransmitting; if an acknowledgement arrives it enters congestion avoidance, and if it times out again it falls back to slow start
QUIC reimplements TCP's Cubic congestion control algorithm. Its improvements over TCP's congestion control include:
1. Hot-pluggable algorithms
In TCP, changing the congestion control policy requires system-level changes. QUIC only needs changes at the application layer, and it can dynamically select a congestion control algorithm per network environment and per user.
2. Forward error correction (FEC)
QUIC uses forward error correction (FEC) to increase the protocol's fault tolerance. A segment of data is split into, say, 10 packets, which are XORed together; the result is transmitted as an extra FEC packet. If one of the 10 packets is lost in transit, it can be reconstructed from the remaining 9 packets plus the FEC packet, greatly increasing fault tolerance.
This suits the current state of network transmission: bandwidth is no longer the bottleneck, round-trip time is, so a new transport protocol can afford some data redundancy to reduce retransmissions.
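A minimal sketch of the XOR-parity idea, assuming equal-length packets and at most one loss per group:

```python
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

packets = [b"pkt0", b"pkt1", b"pkt2", b"pkt3"]
fec = reduce(xor, packets)   # parity packet sent alongside the data

# Suppose packets[2] is lost: XORing everything that survived recovers it
survivors = [packets[0], packets[1], packets[3], fec]
recovered = reduce(xor, survivors)
assert recovered == b"pkt2"
print(recovered)
```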
3. Monotonically increasing Packet Number
For reliability, TCP uses the Sequence Number and ACK to confirm that messages arrive in order, but this design has a flaw.
After a timeout, the client retransmits and then receives an ACK, but it cannot tell whether that ACK belongs to the original transmission or the retransmission, because both acknowledgements look identical. This creates ambiguity:
- RTT: round-trip time
- RTO: retransmission timeout
If the client attributes the ACK to the retransmission when it actually acknowledges the original (or vice versa), the computed RTT comes out too small or too large.
QUIC resolves this ambiguity. Unlike the Sequence Number, the Packet Number is strictly monotonically increasing: if packet N is lost, its retransmission is not labeled N but some larger number N+M. When an acknowledgement arrives, the sender can therefore easily tell whether it refers to the original packet or to the retransmission.
4. ACK Delay
TCP's RTT calculation ignores the delay between the receiver getting the data and sending its acknowledgement, known as the ACK delay. QUIC factors this delay in, making its RTT calculation more accurate.
5. More ACK blocks
Normally a receiver sends an ACK after receiving data from the sender to confirm receipt. But replying to every single piece of data is wasteful, so receivers usually acknowledge several at a time. TCP SACK can report at most 3 ACK blocks. In some scenarios, such as downloads, the server mostly just streams data, yet TCP's design still forces frequent acknowledgements. QUIC can carry up to 256 ACK blocks; on networks with heavy packet loss, more ACK blocks mean fewer retransmissions and better network efficiency.
Flow control
TCP applies flow control per TCP connection. Flow control means the sender must not send too fast, so the receiver can take in data in time; otherwise data overflows and is lost. Congestion control, by contrast, governs the sender's overall sending strategy without considering the receiver's capacity; flow control covers that remaining part.
QUIC establishes just one connection and multiplexes many streams over it, like a single road carrying many vehicles. QUIC therefore applies flow control at two levels: connection level and stream level.
Flow control for a single stream: before the stream transmits data, its flow-control receive window equals the maximum receive window; as the receiver takes in data, the receive window shrinks. Of the received data, some has been processed and some has not; it is the unprocessed data that shrinks the stream's receive window.
As data is processed, the receiver can take in more. When (flow control receive offset − consumed bytes) < (max receive window / 2), the receiver sends a WINDOW_UPDATE frame telling the sender it may send more data; the flow-control receive offset advances, the receive window grows, and the sender can send more data to the receiver.
Stream-level flow control alone cannot stop the receiver from being overwhelmed across many streams, so connection-level flow control is also needed. Having understood stream flow control, connection flow control is easy. For a single stream:
```
receive window = maximum receive window − received data
```
For a connection:
```
receive window = stream 1 receive window + stream 2 receive window + ... + stream N receive window
```
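A sketch of the stream-level bookkeeping just described; the WINDOW_UPDATE trigger and half-window threshold follow the text, while all names are illustrative:

```python
class StreamFlowControl:
    """Receiver-side bookkeeping for a single QUIC stream (simplified)."""

    def __init__(self, max_window: int):
        self.max_window = max_window
        self.receive_offset = max_window  # highest offset the peer may send to
        self.consumed = 0                 # bytes the application has processed

    def on_data_consumed(self, n: int) -> bool:
        """Returns True when a WINDOW_UPDATE frame should be sent."""
        self.consumed += n
        if self.receive_offset - self.consumed < self.max_window / 2:
            # Slide the window forward and tell the sender it may send more
            self.receive_offset = self.consumed + self.max_window
            return True  # emit WINDOW_UPDATE(self.receive_offset)
        return False

fc = StreamFlowControl(max_window=1000)
print(fc.on_data_consumed(400))  # False: plenty of window left
print(fc.on_data_consumed(200))  # True: less than half the window remains
```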
Reference links
- The illustration of HTTP
- Zhihu – what are the major improvements of HTTP/2.0 compared to 1.0?
- HTTP/2 header compression technology introduction
- HTTP2
- Read this article for an in-depth understanding of the HTTP2.0 protocol!
- HTTP/3 principles in action