Preface
If TCP/IP is the foundation of Internet communication, then HTTP is its king. You can see it everywhere: in games, in daily news, in Double Eleven shopping. According to NetCraft's statistics, there are at least 1.6 billion websites and more than 200 million independent domain names in the world, and the underlying mechanism of this huge online world is HTTP. It is no exaggeration to say that without HTTP there would be no Internet communication as we know it!
Voiceover: the TCP/IP protocol suite is shown below. Note that IP here is short for Internet Protocol, the protocol itself, not an IP address.
HTTP is so widely used that it is well worth studying in depth. Understanding it not only helps us explain phenomena we run into at work, such as force refresh and hotlink protection, but also gives us inspiration when designing open-source middleware such as MQ and Dubbo components, where protocol design comes first: you can see the shadow of HTTP in many of them, so learning HTTP gives you good ideas when designing protocols for middleware and other components.
This article will take a thorough look at the design ideas behind HTTP and help you get to the bottom of questions such as:
- What is HTTP, what are its features, and why is HTTP versatile
- Why is anti-crawl a false proposition
- A brief introduction to HTTP 0.9, 1.1, 2.0, 3.0 features
- A colleague once raised a question in a group chat: he entered an image URL in the browser's address bar expecting to preview the image, but the browser downloaded it instead. Can you explain why?
- What do you know about DNS protocols and what is DNS load balancing
- What is the only header field that HTTP 1.1 requires in a request, and what is it for?
- Does no-cache really mean no cache? How much do you know about HTTP caching
- Why does a normal browser refresh sometimes show no network request being captured, while a force refresh always sends one? What does a force refresh actually do?
- What’s the difference between 301 and 302
- The relationship of various protocols to HTTP
- What is the difference between GET and POST
This article will cover HTTP from the following points
- What is HTTP
- The various protocols associated with HTTP
- 1. URI and URL
- 2. The TCP/IP protocol
- 3. The DNS protocol
- HTTP Packet Format
- Request message format
- Response message format
- Request and response headers
- Common header fields
- HTTP 2.0 overview
- 1. Header compression
- 2. Binary format
- 3. Streams
- HTTP 2 head-of-line blocking
- Summary: features of HTTP
- Answers to the opening questions
What is HTTP
HTTP stands for HyperText Transfer Protocol. Let's break the name into its three parts: hypertext, transfer, and protocol.
Hypertext: "text that goes beyond ordinary text", that is, a mixture of text with audio, video, pictures, and files. Many common web pages embed img, video, and other tags that are parsed and displayed as pictures and videos. Beyond this mixed content, the most important thing is that hypertext contains hyperlinks: the Web is woven together by hyperlinks, which means you can click a link in a document such as a web page and jump to another page.
Transfer: a transfer implies at least two participants, say A and B, which means HTTP is a two-way protocol: it generally carries "hypertext" from A to B, or from B to A, as binary packets that follow the agreed protocol (A <===> B). We call the party that initiates the request the requester, and the party that returns data after receiving the request the responder. Note, however, that a transfer is not limited to two participants: relays and intermediaries are allowed, as long as every participant follows the agreed protocol. As shown in the figure, a transfer can involve multiple participants, provided the appropriate protocol is followed.
Protocol: HTTP is a protocol, but what is a protocol? Agreements are not uncommon in daily life: the rental contract we sign when renting a place, the labor contract we sign with an employer after joining. An agreement implies that at least two parties take part and that both sides settle on certain terms, such as a monthly rent of xx yuan in a rental contract. In the same way, a protocol is a constraint on both sides of a communication: only when both sides transmit data according to the agreed rules can they understand each other; otherwise it is a chicken talking to a duck.
After the above explanation, we can give a more accurate definition of HTTP:
HTTP is a convention and specification for the transfer of hypertext data, such as text, pictures, audio, and video, between two points in the computer world.
The various protocols associated with HTTP
Although HTTP dominates today's Internet traffic statistically, it cannot work on its own; it has to be supported by other protocols and specifications.
1. URI and URL
First of all, since we are transferring hypertext between two points, how is that hypertext represented? Hypertext is a resource, and there are countless resources on the Internet, so how do we uniquely identify a resource on the Internet?
A Uniform Resource Identifier (URI) uniquely identifies a resource on the Internet. A Uniform Resource Locator (URL) is a subset of URIs: the difference is that a URI only identifies a resource, while a URL not only identifies the resource but also tells you how to locate it.
A URL is composed of the following parts: protocol, hostname, port, path, and optionally parameters, a query string, and a fragment.
Protocol (scheme): the transfer protocol agreed on by the communicating parties. HTTP, the subject of this article, is of course the most widely used, but there are also ftp, mailto, file, and other schemes.
Hostname: the hostname or IP address of the server where the resource is stored. Sometimes, for authentication, a username and password are required; in that case they are placed in front of the hostname as username:password.
Port: an optional integer. If omitted, the default port of the scheme is used; each transfer protocol has a default port, for example 80 for HTTP and 443 for HTTPS.
Path: Indicates the directory or file address of a host where resources are stored
Parameter: Optional for specifying special parameters.
Query: An optional query string used to pass parameters to a dynamic web page or interface. Multiple parameters can be separated by ampersand (&), and the name and value of each parameter are separated by =.
Fragment: dedicated for the browser. It is used to specify a fragment in network resources. After the fragment is specified, you can directly locate the fragment on the web page.
Examples are as follows
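As a concrete illustration, here is a minimal sketch in Python (the sample URL is made up) that splits a URL into the parts listed above:

```python
from urllib.parse import urlsplit

# A minimal sketch: break an illustrative URL into the parts described above.
url = "http://user:pass@www.example.com:80/dir/index.html?uid=1&name=foo#ch1"
parts = urlsplit(url)

print(parts.scheme)    # 'http'            -> protocol (scheme)
print(parts.username)  # 'user'            -> optional credentials before the hostname
print(parts.hostname)  # 'www.example.com' -> hostname
print(parts.port)      # 80                -> optional; defaults to the scheme's port
print(parts.path)      # '/dir/index.html' -> path on the host
print(parts.query)     # 'uid=1&name=foo'  -> query string, pairs separated by &
print(parts.fragment)  # 'ch1'             -> fragment, used only by the browser
```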
2. The TCP/IP protocol
If the URL specifies a hostname such as www.example.com, DNS first resolves it to an IP address, and only then does communication begin. Why must the hostname be resolved to an IP address? Because HTTP runs on top of the TCP/IP protocol stack, and that is how the stack is defined. Let's look at what each layer of the TCP/IP stack does.
The TCP/IP stack has four layers
- Link layer: responsible for sending raw packets over Ethernet, Wi-Fi, and other physical networks. It works at the network interface card (NIC) level and uses MAC addresses to identify devices on the network.
- Internet layer: provides routing and addressing, so that two end systems can reach each other along a suitable path. Just as a letter needs an address to be delivered, data needs an IP address to find its destination.
- Transport layer: provides end-to-end communication services to application processes. The two main protocols here are TCP and UDP: TCP offers connection-oriented byte streams with reliability, flow control, multiplexing, and other services; UDP offers no complex control mechanisms and provides simple, connectionless message delivery on top of IP.
- Application layer: the three layers below it lay a solid foundation for network communication, and this layer has plenty of room to play. Application-layer protocols can be designed for different services; each one exists to solve a particular class of application problem, and the communicating application processes follow the rules of that protocol.
We can think of the lower three layers as the expressway infrastructure, while the application layer decides what goods to carry and when to open or close the road.
When the TCP/IP protocol family is used for network communication, the communication with the peer party is hierarchical and sequential. In each layer, the data to be sent is appended with a header that contains information necessary for the layer, such as the destination address to be sent and protocol-related information.
Once the receiver receives the data, each layer similarly parses its header field until the application layer receives the corresponding data.
With this layered design, each layer only needs to do its own job, which gives good extensibility. Take HTTP: the underlying transport can be TCP, or it can be UDP, and if some day an even better protocol comes along it can be swapped in without affecting the other layers. This is the famous layering principle in computing: there is no problem that cannot be solved by adding a layer; if there is, add another layer.
The IP packet header defines the 32-bit source IP address and destination IP address, as shown in the following figure
Therefore, the application layer must know the IP address of the peer before transmitting data.
3. The DNS protocol
From the previous section we know that before sending a request we must know the other party's IP address. But an IP address is a string of numbers like "161.117.232.65" that ordinary people cannot remember. Imagine having to memorize Baidu's IP address just to visit Baidu; that would drive anyone crazy. What to do? Think of a scene from daily life: if we want to call someone but can't remember their number, we look up their name in a phone book and then dial. The phone book translates a name into a phone number.
Similarly, a normal person can only remember a website name like baidu.com, so there needs to be a "phone book" that translates web addresses into IP addresses. That is exactly what DNS does.
To understand how DNS works, we first need to understand the hierarchical structure of domain names. A name like www.baidu.com is a domain name, and it is hierarchical: the rightmost part is the top-level domain, followed by the second-level domain, descending toward the left, with the hostname at the far left. The hostname usually indicates the purpose of the host, for example "www" for a host that serves the World Wide Web.
Of course, this is not absolute, the key is just convenient for us to remember it.
Just as domain names are hierarchical, so is DNS. The core DNS system is a three-level, tree-shaped, distributed structure that basically mirrors the structure of domain names.
- Root DNS Server: returns the IP addresses of top-level DNS servers such as com, CN, and net.
- Top-level DNS Server: an authoritative DNS Server that manages its own domain names. For example, the COM top-level DNS Server can return the IP address of the apple.com DNS Server.
- Authoritative DNS Server: Manages the IP address of the host with its own domain name. For example, the Apple.com Authoritative DNS Server can return the IP address of www.apple.com.
The root DNS servers are the key: their addresses must be well known, because once you can reach them every lower level can be found; otherwise domain name resolution is impossible. Given the DNS hierarchy, it is not hard to guess how a request for www.apple.com is resolved:
- First, request a root DNS server, which returns the address of the com top-level DNS server
- Then request the com top-level DNS server, which returns the address of the apple.com authoritative DNS server
- Finally, request the apple.com authoritative DNS server, which returns the IP address of www.apple.com
We call these three levels the core DNS resolution system. Now think about it: there are countless PCs, phones, and other devices in the world, and if every one of them sent every lookup to this core system, the DNS system would probably collapse. How can the pressure be relieved? The answer is caching. In fact, many large companies and network operators build their own DNS servers to act as proxies for user queries, answering requests in place of the core DNS system. They cache query results, so when the same request arrives again and the cached result has not expired, the cached result is returned directly. You may have heard of Google's 8.8.8.8 DNS server; it is Google's self-built DNS server. We call this kind of self-built DNS server a "non-authoritative DNS server".
In addition to non-authoritative DNS servers, there are also the browser's cache and the operating system's cache (the well-known /etc/hosts file is a form of operating system DNS cache).
In this case, if you request www.example.com, the complete DNS resolution flow is as follows:
1. After you enter www.example.com in the browser, the browser first checks whether its own DNS cache has an unexpired entry; if so, the IP address is taken directly from the cache.
2. Otherwise the query goes to a local DNS resolver, usually run by the user's Internet service provider (ISP), such as a cable or DSL broadband provider, or by the corporate network. On a Mac you can open the DNS Servers panel in the network settings to see the default DNS servers your ISP provides.
3. The DNS resolver forwards the query for www.example.com to a root DNS server, which returns the address of the .com top-level DNS servers.
4. The DNS resolver forwards the query for www.example.com again, this time to one of the .com TLD name servers obtained in step 3. The .com name server responds with the names of the four Amazon Route 53 name servers associated with the example.com domain.
5. An Amazon Route 53 name server looks up the www.example.com record in the example.com hosted zone, obtains the relevant value, for example the IP address of the web server (192.0.2.44), and returns that IP address to the DNS resolver.
6. The DNS resolver now has the required IP address and returns it to the web browser. The resolver also caches the IP address of example.com for the duration you specify, so that it can answer more quickly the next time someone browses example.com.
We can use the dig tool to verify the resolution flow above.
You can see that the resolution flow indeed matches our flowchart! Also notice that four IP addresses are returned, so a client can pick any one of them for its request. This is commonly known as DNS load balancing, and it effectively spreads the load across servers.
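If you don't have dig at hand, a quick sketch using Python's standard resolver shows the same idea; the hostname is only illustrative, and a site that uses DNS load balancing will typically return several addresses here:

```python
import socket

# A minimal sketch: ask the system resolver for all A records of a hostname.
hostname = "www.example.com"
name, aliases, ip_list = socket.gethostbyname_ex(hostname)

print("canonical name:", name)
print("ip addresses  :", ip_list)  # multiple entries -> DNS load balancing in action
```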
HTTP Packet Format
Next, we introduce the HTTP packet format. To properly communicate with each other, the communication parties must follow the protocol to understand each other’s information. The protocol defines the format of HTTP request and response packets.
Both request and response messages are composed of four parts: “start line”, “header”, “blank line” and “entity”, but the start line is slightly different.
Request message format
Let’s look at the format of the request message
The following is an example:
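Here is a minimal sketch of what such a request message looks like on the wire (all values are illustrative):

```python
# A minimal sketch of an HTTP/1.1 request message (illustrative values).
# Structure: start line, header fields, a blank line, then an optional body.
request = (
    "GET /index.html HTTP/1.1\r\n"   # start line: method, path, protocol version
    "Host: www.example.com\r\n"      # the only header field HTTP/1.1 makes mandatory
    "Accept: text/html\r\n"          # media types the client can handle
    "Connection: keep-alive\r\n"
    "\r\n"                           # blank line ends the headers; a GET has no body
)
print(request)
```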
There are several common types of request methods
1. GET: requests the resource identified by the URL; the server parses the request and returns the resource in the response. GET is idempotent: no matter how many times it is sent, it only returns the resource and never creates or changes anything. A GET request normally carries only a request header and no request body.
2. HEAD: semantically similar to GET, but the response contains only the headers and no body.
3. POST: mainly used to create, modify, or upload resources; it is not idempotent, and the data is generally carried in the request body.
4. PUT: modifies a resource. It is rarely used in practice, because POST also carries modification semantics, so most real-world systems use POST instead.
5. OPTIONS: lists the methods supported on a resource. It is rarely seen, but it is used in cross-origin (CORS) requests and is therefore important.
The protocol version specifies the HTTP version used by the client; currently 1.1 and 2.0 are the most common. If the requester specifies 1.1, the responder replies using HTTP 1.1 after receiving the request.
Response message format
The format of the response line is as follows
The sample is as follows
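And a minimal sketch of the corresponding response message (again with illustrative values):

```python
# A minimal sketch of an HTTP/1.1 response message (illustrative values).
# Structure: status line, header fields, a blank line, then the entity body.
response = (
    "HTTP/1.1 200 OK\r\n"                        # status line: version, status code, reason phrase
    "Date: Mon, 01 Jan 2024 00:00:00 GMT\r\n"    # when the response was generated
    "Content-Type: text/html; charset=utf-8\r\n" # media type of the entity body
    "Content-Length: 12\r\n"                     # size of the entity body in bytes
    "\r\n"                                       # blank line separates headers from the body
    "<p>hello</p>"                               # entity body (12 bytes)
)
print(response)
```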
The response packets contain the following five types of status codes:
- 1xx: informational; the protocol processing is in an intermediate state and further action is required.
- 2xx: success; the request was received and processed correctly.
- 3xx: redirection; the resource's location has changed and the client needs to resend the request.
- 4xx: client error; the request message is wrong and the server cannot process it.
- 5xx: server error; an internal error occurred while the server was processing the request.
Common status codes and their definitions are as follows (see the link at the end of this article for more details)
Status code | Reason phrase | Meaning
---|---|---
100 | Continue | This interim response indicates that everything so far is OK; the client should continue with the request, or ignore this response if the request has already completed.
200 | OK | The resource was successfully obtained or the operation succeeded.
201 | Created | The request succeeded and a new resource was created; typically returned for POST requests.
206 | Partial Content | The server successfully processed a partial GET request. Download tools such as FlashGet or Xunlei use this response to resume interrupted downloads or to split a large file into several segments downloaded in parallel. The request must include a Range header indicating the range of content the client wants, and may include If-Range as a condition.
301 | Moved Permanently | Permanent redirect: the requested resource has moved to the URL given in the Location header, and that new location is fixed and will not change.
302 | Found | Temporary redirect: the requested resource has temporarily moved to the URL given in the Location header.
400 | Bad Request | The server cannot understand the request, typically because the request syntax or parameters are wrong; the client should not resubmit it without modification.
401 | Unauthorized | The request requires user authentication. The response must include a WWW-Authenticate header applicable to the requested resource asking for credentials.
403 | Forbidden | The server understands the request but refuses to execute it, usually because the user does not have permission.
404 | Not Found | The requested resource was not found on the server.
405 | Method Not Allowed | The method given in the request line cannot be used for this resource. The response must include an Allow header listing the methods the resource accepts.
500 | Internal Server Error | The server encountered an unexpected condition that prevented it from fulfilling the request.
502 | Bad Gateway | The server, acting as a gateway or proxy, received an invalid response from the upstream server while handling the request.
Knowing these status codes lets us locate and troubleshoot problems quickly: a 5xx tells us the server-side logic has a problem, a 400 tells us the client's request parameters are wrong, and the other side doesn't have to investigate blindly.
Request and response headers
Request headers and response headers have essentially the same format: both are key-value pairs, with the key and value separated by a colon ":". Header fields are also very flexible: besides standard fields such as Host and Connection, you can add arbitrary custom headers, which gives the HTTP protocol unlimited room for extension!
Common header fields
The HTTP protocol specifies a very large number of header fields that can be used for a wide variety of functions, but they can be divided into four basic categories
- Generic fields: can appear in both request headers and response headers;
- Request field: Can only appear in the request header to specify further information about the request or additional conditions.
- Response field: the field can only appear in the response header and supplement the information about the response packet.
- Entity field: This is actually a generic field, but specifically describes additional information about the body.
In fact, parsing and processing HTTP messages is essentially a matter of processing the header fields: connection management, cache control, content negotiation, and so on are all carried out through header fields. To understand the header fields is essentially to understand HTTP, so they are very important. Let's look at what these header fields mean.
1. Generic fields
Header field name | Description
---|---
Cache-Control | Controls caching behavior
Connection | Hop-by-hop headers, connection management
Date | Date and time the message was created
Pragma | Message directives
Trailer | Header fields placed at the end of the message
Transfer-Encoding | Transfer encoding applied to the message body
Upgrade | Upgrade to another protocol
Via | Proxy server information
Warning | Error notification
2. Request header fields
Header field name | Description
---|---
Accept | Media types the user agent can handle
Accept-Charset | Preferred character sets
Accept-Encoding | Preferred content encodings
Accept-Language | Preferred (natural) languages
Authorization | Web authentication information
Expect | Expect particular behavior from the server
From | The user's e-mail address
Host | The server hosting the requested resource
If-Match | Compare entity tags (ETags)
If-Modified-Since | Compare the resource's update time
If-None-Match | Compare entity tags (opposite of If-Match)
If-Range | Request a byte range of the resource if the entity is unchanged
If-Unmodified-Since | Compare the resource's update time (opposite of If-Modified-Since)
Max-Forwards | Maximum number of hops the request may be forwarded
Proxy-Authorization | Client authentication information for a proxy server
Range | Byte range of the entity being requested
Referer | URI of the page from which the request originated
TE | Preferred transfer encodings for the response
User-Agent | Information about the HTTP client program
3. Response header fields
Header field name | Description
---|---
Accept-Ranges | Whether byte range requests are accepted
Age | Time elapsed since the resource was created
ETag | Entity tag used for resource matching
Location | Redirects the client to the specified URI
Retry-After | When the client should retry the request
Server | Information about the HTTP server software
Vary | Cache management information for proxy servers
WWW-Authenticate | Authentication information the server requires from the client
4. Entity header field
Header field name | Description
---|---
Allow | HTTP methods the resource supports
Content-Encoding | Encoding applied to the entity body
Content-Language | Natural language of the entity body
Content-Length | Size of the entity body (in bytes)
Content-Location | Alternative URI for the corresponding resource
Content-MD5 | Message digest of the entity body
Content-Range | Position of this partial body within the full entity
Content-Type | Media type of the entity body
Expires | Date and time after which the entity body expires
How can we remember so many fields? Rote memorization certainly won't work; it is much easier to understand them by function. HTTP mainly provides us with the following capabilities.
1. Content negotiation
Content negotiation means that the client and the server negotiate over the content of the response so that the most suitable representation of a resource is delivered to the client. Negotiation is based on the language, character set, and encoding of the response resource. A common example: if you set a different language in Chrome, the home page of some sites will show different content.
Accept, Accept-Charset, Accept-Encoding, Accept-Language, and Content-Language are the header fields used as the criteria for content negotiation.
For example
The above figure shows the following meanings
- Client: please give me a resource of type text/*, in language en, preferably br-encoded (gzip would also be acceptable).
- Server: OK, I found a resource encoded with br (Content-Encoding: br) and in English (Content-Language: en); its URL is /URLe (Content-Location: /URLe), go and fetch it from there.
The mapping between content protocol request headers and corresponding response headers is as follows:
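As a small hands-on sketch of these headers (the endpoint httpbin.org is only an illustrative assumption, not something the article depends on), a client can state its preferences and then read back what the server chose:

```python
import urllib.request

# A minimal sketch of content negotiation from the client side.
req = urllib.request.Request(
    "https://httpbin.org/headers",      # illustrative endpoint
    headers={
        "Accept": "text/*",             # preferred media types
        "Accept-Language": "en",        # preferred natural language
        "Accept-Encoding": "br, gzip",  # preferred content encodings
    },
)
with urllib.request.urlopen(req) as resp:
    # The server describes what it actually chose in the matching Content-* headers.
    print(resp.headers.get("Content-Type"))
    print(resp.headers.get("Content-Encoding"))
    print(resp.headers.get("Content-Language"))
```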
2. Cache management
Cache management is another important part of the HTTP protocol and is well worth mastering.
Some resources are not updated for a long time, so there is no need to go over the network to the server for every request. If the result of the first request is saved locally and later requests are served from that local copy, things are undoubtedly much faster and the server is under far less pressure.
The header field involved in caching is Cache-Control. Cache directives are one-way: a directive set in the request does not have to appear in the response, and even if the request carries no Cache-Control the server may still return one.
For example, after the client sends a request the server returns Cache-Control: max-age=30, meaning the client may cache the resource for 30 seconds; within those 30 seconds the client can take the resource straight from its cache, and only afterwards does it need to go back to the server.
max-age is the most commonly used Cache-Control attribute. It indicates how long the resource may be kept, and note that the clock starts at the creation time of the response (the Date field, i.e. the moment the response leaves the server).
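To make the max-age rule concrete, here is a minimal sketch of the freshness check a cache performs; the header values are illustrative:

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

# A minimal sketch of the max-age rule: freshness is counted from the response's
# Date header (illustrative values below).
response_headers = {
    "Date": "Mon, 01 Jan 2024 00:00:00 GMT",  # when the response left the server
    "Cache-Control": "max-age=30",            # fresh for 30 seconds after Date
}

created = parsedate_to_datetime(response_headers["Date"])
max_age = int(response_headers["Cache-Control"].split("max-age=")[1])
age = (datetime.now(timezone.utc) - created).total_seconds()

if age < max_age:
    print("still fresh: answer from the local cache")
else:
    print("stale: go back to the server (or revalidate)")
```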
In addition, there are other attribute values as follows:
no-cache: despite the name, this does not forbid caching; the resource may be cached, but it must be revalidated with the server before it is used.
no-store: this is the real "do not cache"; pages that change very frequently, such as flash-sale (seckill) pages, are not suitable for caching.
must-revalidate: once a resource has expired (e.g. its max-age has passed), the cache must not use it to answer subsequent requests until it has been successfully revalidated with the origin server.
The differences are as follows:
Cache-Control by itself can only force data to be refreshed; it does not make full use of the cache, because once the cached copy expires the client has to check with the server whether it is still the latest version, and doing that by re-downloading the whole resource wastes performance. For this reason HTTP also introduces conditional requests (conditional caching).
Conditional requests all start with If- and include the If-Match, If-Modified-Since, If-None-Match, If-Range, and If-Unmodified-Since header fields. The two we use most often are If-Modified-Since and If-None-Match, so I'll focus on them.
If-Modified-Since: if the requested resource has not been modified since the given time, the server returns a 304 response with no message body. The server returns Last-Modified in the response to the first request, and the client sends that value back in the If-Modified-Since header on its next request.
If-None-Match: a conditional request header. For GET and HEAD requests, the server returns the requested resource only if it has no resource whose ETag matches any of the values listed in this header; otherwise it returns 304.
Note that the figure above also shows an ETag being returned. ETag is short for Entity Tag and is a unique identifier for a resource. It mainly solves the problem that a modification time cannot precisely capture changes, for example when a file is modified several times within one second: If-Modified-Since would conclude the resource has not changed, whereas the ETag changes on every modification. So if you are sensitive to resource changes, use If-None-Match.
Note: ETags come in strong and weak forms. A strong ETag requires the resources to be identical byte for byte; a weak ETag, marked with a "W/" prefix before the value, only requires the resources to be semantically the same, so trivial differences such as a few extra spaces still match.
Also note that If-Modified-Since and If-None-Match only come into play after the cached resource has expired. In a development environment, however, caching can get in our way: we want every request to hit the server and never be answered from the cache. What do we do? That is where refresh and force refresh come in. A normal refresh adds Cache-Control: max-age=0 to the request header, meaning the latest resource is required, so the browser bypasses its local copy and asks the server. A force refresh instead adds Cache-Control: no-cache, which likewise sends the request to the server; in practice refresh and force refresh usually have the same effect.
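Putting this together, here is a minimal sketch of a conditional GET using If-None-Match; the URL and the cached ETag value are illustrative assumptions:

```python
import urllib.error
import urllib.request

# A minimal sketch of revalidation: send the cached ETag back in If-None-Match;
# a 304 reply means the locally cached body can be reused.
cached_etag = '"abc123"'                      # hypothetical ETag from an earlier response

req = urllib.request.Request(
    "https://www.example.com/",               # illustrative resource
    headers={"If-None-Match": cached_etag},
)
try:
    with urllib.request.urlopen(req) as resp:
        print(resp.status, "- resource changed, new ETag:", resp.headers.get("ETag"))
except urllib.error.HTTPError as e:
    if e.code == 304:
        print("304 Not Modified - keep using the cached copy")
    else:
        raise
```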
3. Entity headers
Because the entity part can carry text, audio, video, files, and so on, the sender generally needs to state the entity's type, size, encoding, and language (English, French, ...), so that the receiver can understand the content.
Starting with the most important field, Content-Type, there are usually the following data types.
These data types are called MIME types and indicate the type of the resource. When the requester uploads a resource (usually with a POST request), it can state the resource's type in the Content-Type header. When the requester fetches a resource (a GET request), it can use the Accept header to say what kinds of resource it wants; when the server finds a matching resource, it states the returned type in Content-Type so that the browser or other client can parse it correctly.
As shown in the figure, the client uses Accept: image/* to tell the server that it wants image resources such as PNG, JPEG, or SVG, and the server returns the image with Content-Type: image/png to tell the client that the resource type is PNG.
Content-Type tells the browser and other clients how to interpret the resource. Looking at it the other way round: an image that would normally be displayed directly in the browser can also be made to download instead.
As you can see, the Content-Type returned here is application/octet-stream. This type is the default for binary application files, meaning an unknown file: the browser will not execute it or ask whether to, it simply downloads it. If you want the image to be displayed in the browser rather than downloaded, the server should return a Content-Type such as image/png, and the browser will parse and display it automatically.
The correspondence between the requester's and responder's entity headers was also shown in the content negotiation section above; here is that figure again:
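The preview-or-download behavior is easy to reproduce. Below is a minimal sketch of a server (the file name and port are illustrative) that returns the very same bytes under two different Content-Type values: one path is displayed inline by the browser, the other is downloaded.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# A minimal sketch: the same image bytes are rendered or downloaded purely
# depending on the Content-Type the server chooses to return.
class ImageHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        with open("logo.png", "rb") as f:   # hypothetical local image file
            body = f.read()
        self.send_response(200)
        if self.path == "/view":
            self.send_header("Content-Type", "image/png")                 # browser displays it
        else:
            self.send_header("Content-Type", "application/octet-stream")  # browser downloads it
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("localhost", 8080), ImageHandler).serve_forever()
```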
4. Connection management
Connection management is also a very important HTTP capability. Because users never perceive it directly, many people overlook it, but it is the unsung hero behind the scenes: it plays a huge role in transmission performance and is one of the main drivers of the protocol's evolution across versions. Let's look at how connection management has improved in each HTTP version.
First of all, we know that to establish a reliable connection the two parties must complete the TCP three-way handshake before any HTTP message can be transmitted, and after the transfer they must perform the four-way wave to tear the connection down.
In the era of HTTP 0.9 and 1.0, the connection was closed immediately after the HTTP message was sent; this is called a short connection.
As you can see, short connections are very inefficient: a great deal of time is wasted on meaningless three-way handshakes and four-way waves.
HTTP 1.1 therefore improved on this: the connection is reused instead of being closed after every message. We call this a persistent (long) connection. As the comparison in the figure below shows, persistent connections are far more efficient because a large number of pointless handshakes and teardowns are eliminated.
A persistent connection can be requested explicitly with the Connection: keep-alive header, and in HTTP 1.1 it is enabled by default even when the header is absent. If the server supports persistent connections it returns Connection: keep-alive whether or not the client asked explicitly, and from then on both sides use the same connection to send and receive messages. If either side wants to close the connection, it simply sends the Connection: close header field.
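A minimal sketch of connection reuse from the client side (the host and paths are illustrative): several requests travel over one TCP connection instead of paying a handshake per request.

```python
import http.client

# A minimal sketch: HTTP/1.1 keeps the connection open by default, so multiple
# requests can reuse the same TCP connection.
conn = http.client.HTTPConnection("www.example.com")   # illustrative host
for path in ("/", "/a", "/b"):                         # illustrative paths
    conn.request("GET", path)                          # keep-alive is the HTTP/1.1 default
    resp = conn.getresponse()
    resp.read()                                        # drain the body so the connection can be reused
    print(path, resp.status)
conn.close()                                           # explicitly tear the connection down when done
```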
Persistent connections greatly improve transmission efficiency, but a new problem appears: on one connection, HTTP messages must be handled one after another. If multiple requests are sent on the same connection, they pile up and are processed strictly in order, not in parallel. This is the well-known head-of-line blocking.
Although HTTP 1.1 defines a pipelining mechanism that allows several requests to be sent at once, the responses must still come back in order, so each one waits for the response to the previous request; the mechanism is little better than nothing.
If we think again about head-of-line blocking, the essential cause is that the packets of different requests cannot be told apart. Recall our earlier layered model.
The figure above shows the packets sent at each layer. Each data link layer packet (strictly speaking, a link layer packet is called a frame) limits the size of the IP datagram it can carry; this limit is generally called the MTU (Maximum Transmission Unit). The size of a TCP segment is likewise limited, by the MSS (Maximum Segment Size).
In other words, each Ethernet frame can carry only a limited amount of application-layer data. If the data the application layer wants to send exceeds that limit, it must be split across several frames before transmission.
The receiver only has a complete request once it has extracted the application-layer data from each packet and reassembled it; responses work the same way.
This is also why the entity header field Content-Length exists: the receiver accumulates packets and uses Content-Length to judge whether the combined size has reached the stated value; only then has the full message arrived and can the request be parsed. If the value has not yet been reached, the receiver must keep waiting for more packets.
Voiceover: Content-Length is an entity header, so it is only present when entity data is transmitted, as in POST or PUT requests; it does not appear in GET requests.
At the bottom, all requests share the same connection, which means their packets are sent serially.
In HTTP 1.1 there is no way to tell which request a given packet belongs to, so the protocol stipulates that requests on a connection must be handled strictly in order: only after all the packets of one request have been received (as determined via Content-Length) and processed can the next request be handled. As a result, one slow request delays every request behind it.
How does HTTP 2.0 handle queue header blocking? Let’s take a look at HTTP 2.0.
HTTP 2.0 overview
HTTP 2.0 is a great leap forward in performance, and what is even more remarkable is that it achieves this while keeping the semantics of HTTP 1.1: concepts such as request methods, URIs, status codes, and header fields are all preserved, so there is no cost of relearning them. In everyday software upgrades, backward compatibility is crucial; it is a precondition for large-scale adoption, because if every upgrade changed all the interfaces, who would dare to upgrade? HTTP 2.0 therefore makes its big changes only at the syntax level, completely changing the format in which HTTP messages are transmitted.
At the syntax level, the main changes are the following.
1. Header compression
HTTP 1.1 supports compressing the body but not the header. A request often carries hundreds or even thousands of bytes of headers for a body of only a few dozen bytes, which wastes bandwidth. Moreover, since 1.1 connections are persistent by default, a single connection may carry hundreds or thousands of requests whose headers are largely repetitive, an enormous waste of bandwidth! Imagine the request below: to transmit just a few bytes of "name=Michale", a huge pile of headers has to be sent along with it.
So what does HTTP 2.0 do? It uses a purpose-built "HPACK" algorithm: the client and server each maintain dictionaries, repeated strings are represented by index numbers, and Huffman coding is used to compress strings and integers, achieving compression rates of up to 90%.
Briefly, header compression between an HTTP 2.0 client and server works as follows (a toy sketch follows this list):
- Both sides hold the same static table: a dictionary of common header names and of particularly common name-value combinations. If a header in a request or response matches an entry in the static table, only its index number needs to be sent.
- Both sides also maintain the same dynamic table, which can grow over time. Dynamic key-value pairs that do not match the static dictionary, such as "user-agent: xxx", "host: xxx", or cookies, still have to be sent in full the first time, but both ends then create dynamic dictionary entries from them: for instance "user-agent: xxx" might become index 62 and "host: xxx" index 63. Once the entries exist, later messages only need to carry the index numbers 62 and 63; the amount of data transmitted drops sharply and efficiency improves greatly! Note that the dynamic dictionary is maintained per connection, so on every new connection the dynamic key-value pairs must be sent in full once.
- Huffman coding based on a static Huffman code table is also supported: content found in neither the static nor the dynamic dictionary can be Huffman-encoded to reduce its size. HTTP/2 uses a static Huffman code table that is likewise built into both the client and the server.
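The sketch below illustrates only the dictionary idea; the tables and index numbers are simplified stand-ins, not the real HPACK static table, and Huffman coding of literals is left out:

```python
# A toy illustration of the dictionary idea behind HPACK (NOT the real tables).
static_table = {2: (":method", "GET"), 3: (":method", "POST")}  # shared and fixed
dynamic_table = {}                                              # built up per connection
next_index = 62                                                 # dynamic entries follow the static table

def encode_header(name, value):
    """Return an index if the pair is already known, otherwise register it and send the literal."""
    global next_index
    for idx, pair in list(static_table.items()) + list(dynamic_table.items()):
        if pair == (name, value):
            return idx                          # full match: only the small index goes on the wire
    dynamic_table[next_index] = (name, value)   # first occurrence: the literal goes on the wire,
    next_index += 1                             # and both ends add it to their dynamic table
    return (name, value)

print(encode_header("user-agent", "curl/8.0"))  # first request: full literal key-value pair
print(encode_header("user-agent", "curl/8.0"))  # later requests: just the index (62)
```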
2. Binary format
HTTP 1.1 messages are plain text, while HTTP 2.0 is fully binary and moves some ideas from TCP up into the application layer: the original header+body message is broken into several small binary "frames", with HEADERS frames carrying the header data and DATA frames carrying the entity data.
These binary frames contain nothing but 0s and 1s. HTTP 2.0 chose the binary format with protocol parsing in mind: binary is not friendly to humans, but it is much easier for machines to parse. Plain text is prone to ambiguity, for example around letter case and whitespace, so parsers have to use sophisticated state machines, which is slow and troublesome. With binary, the size, order, and flag bits of every field can be strictly defined; "right is right and wrong is wrong", there is no ambiguity in parsing, the implementation is simple, the encoding is compact, and parsing is fast.
3. Streams
HTTP 2 defines the concept of a "stream": a bidirectional sequence of binary frames. All the frames (HEADERS and DATA frames) that belong to the same request-response exchange carry the same unique stream ID, so each request can be told apart. Within a virtual stream the frames are transmitted in order and reassembled in order at the receiver, then parsed into the familiar HTTP 1.1 style request headers and entity.
The requester sends its request on a stream and the responder sends its response back on the same stream, and many streams can be active at once. Compared with HTTP 1.1, where a connection can handle only one request-response at a time, this greatly improves throughput.
As shown, multiple streams on one connection can send and receive request and response frames at the same time, while within each stream the packets are still transmitted and reassembled in order.
All streams flow over the same connection: this is the classic HTTP 2.0 multiplexing. Because the streams are independent of one another, whichever request finishes processing first can send its response back over the connection first, eliminating head-of-line blocking at the HTTP level.
As shown, in HTTP 2 two requests can be sent at the same time and their responses received at the same time, whereas in HTTP 1.1 you must wait for the response to one request before the next can be processed.
HTTP 2 head-of-line blocking
The syntactic changes introduced in HTTP 2, such as streams and frames, improve efficiency, but head-of-line blocking still exists. Why?
The main reason is that HTTP 2's frames exist only at the application layer; they are eventually handed down to TCP, which encapsulates and transmits them, and on a single connection TCP ultimately sends the packets strictly in sequence.
As the diagram shows, a stream is only a virtual concept; at the connection level everything is still sequential.
TCP is a reliable protocol: to guarantee that packets reach the other side in order, it retransmits lost packets. If three packets are sent and the last two arrive but the first is lost, the TCP stack buffers the packets it has already received and waits until the first packet has been retransmitted successfully. So on a poor network, as soon as one packet is lost, the retransmission mechanism blocks all the packets behind it and the application above cannot get any of the data.
Since this is a TCP mechanism and cannot be changed, head-of-line blocking in HTTP 2 is unavoidable. HTTP 3 improves on this by replacing TCP with UDP as the transport. UDP is unordered and connectionless, and its packets have no dependencies on one another, which removes head-of-line blocking at the root. Of course, these properties of UDP alone are not enough for reliable communication, so on top of UDP Google added connection management, congestion windows, flow control, and other TCP-like mechanisms; the resulting protocol is called QUIC.
As you can also see, both HTTP 2 and HTTP 3 build TLS in underneath and keep the security guarantees of HTTPS, which is understandable: as the Internet develops rapidly, enterprises pay more and more attention to communication security.
Summary: features of HTTP
With that said, let’s briefly summarize the features of HTTP
1. Flexible and extensible
This is arguably HTTP's most important feature and the fundamental reason it has been able to dominate the Internet and outlast other protocols. It only defines the basic message format, such as fields separated by spaces and lines separated by newlines, and basic semantics such as "header + body"; beyond that it imposes few restrictions: it does not dictate what must be carried in the header, nor what transport must be used underneath. This is what makes it possible for HTTPS to insert an SSL/TLS layer to encrypt traffic, for HTTP 2 to use frames and streams for multiplexing, and for HTTP 3 to use UDP to eliminate head-of-line blocking entirely. If an even better protocol comes along some day, the lower layer can be swapped out again.
2. Reliable transmission
Whether the transport is TCP or QUIC (which runs on top of UDP), it is reliable and guarantees that the application layer's requests and responses arrive intact. This matters a lot; otherwise the application layer could not parse messages that were damaged or lost in transit.
3. Application layer protocol
HTTP is an application-layer protocol, yet many people confuse it with TCP. As mentioned earlier, TCP is the highway that gives us a reliable transport channel, while HTTP decides how the goods are represented (header + blank line + body), whether goods can be served from intermediate stations (caching), and when the highway should be opened or closed (connection management). How the goods are reliably delivered to their destination, on the other hand, is TCP's business, not HTTP's; the two are often mixed up.
4. Request response mode
HTTP requires the requester to initiate a request and the responder to answer it; the responder never sends a response unprompted. The roles can also be swapped, as in the figure, where the server acts as the requester and the client as the responder.
5. Stateless
Each HTTP request-response pair is independent: every exchange stands completely on its own, and after the server has handled a request and sent the response it records nothing about that request. Someone might object: "then why does my shopping cart still remember the items I added earlier?" That is done with Sessions. The server uses a Session to track each user's behavior: it assigns each user a sessionId, returns it to the client in a Set-Cookie header, and the client carries that sessionId in the Cookie header on every subsequent request, so the server knows which user each request comes from.
Answers to the opening questions
The answers to most of the opening questions are already hidden in the body of this article; let's go through a few of the remaining ones.
1. Why is anti-crawl a false proposition
Because both normal client requests and crawler requests must follow the HTTP protocol, the HTTP messages sent by a crawler have no essential difference from those of a normal user, and the server cannot reliably tell them apart. It can only resort to tricks: for example, if requests from a certain IP are unusually frequent within a short period, it labels them as a crawler and refuses service, or it raises the difficulty with CAPTCHAs. None of this can eliminate crawlers completely, because crawlers have countermeasures too: they can rotate their IP addresses to fool the server, or crack the CAPTCHAs. Crawling and anti-crawling are locked in an endless arms race of cracking and counter-cracking.
2. What is the difference between 301 and 302? I actually touched on this in my earlier article on high-performance short-link design; in a short-link system, redirection is a point you must think about.
As shown in the figure, after entering URL A the user is redirected to URL B, and you need to decide whether to use 301 or 302. The difference is as follows (see the sketch after this list):
- 301 means a permanent redirect: after the first request returns the long URL, the next time the browser opens the short link it will not contact the short-link server at all but go straight to its own cache. The server therefore never sees the clicks on the short URL, and if the link belongs to a marketing campaign there is no way to analyze how the campaign performed. So we generally do not use 301.
- 302 means a temporary redirect: every request for the short link goes back to the short-link server (unless Cache-Control or Expires in the response tells the browser to cache it), which makes it easy for the server to count clicks. Using 302 does put a little more pressure on the server, but in an age when data is this valuable it is worth it, so 302 is recommended!
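A minimal sketch of a short-link endpoint answering with 302 (the mapping, port, and target URL are illustrative); switching the status code to 301 is all it takes to make the redirect permanent and browser-cacheable:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# A minimal sketch: a 302 redirect keeps every click flowing through the server
# so it can be counted; a 301 would let the browser cache the redirect instead.
LINKS = {"/abc": "https://example.com/very/long/target/url"}   # illustrative mapping

class ShortLinkHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        target = LINKS.get(self.path)
        if target:
            self.send_response(302)               # temporary redirect
            self.send_header("Location", target)  # where the browser should go next
        else:
            self.send_response(404)
        self.end_headers()

HTTPServer(("localhost", 8080), ShortLinkHandler).serve_forever()
```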
3. What is the only field that HTTP 1.1 requires to be passed in the request header
It is Host. HTTP 1.1 allows one server to host multiple websites, i.e. several domain names can map to the same server, so the request must carry the Host header; when the request arrives, the server uses Host to pick the right site and then locates the requested URL within it.
4. A common misconception: What is the difference between GET and POST
The most common misconceptions are that POST is more secure, or that GET passes parameters in the URL while POST puts them in the request body. These answers miss the point. In fact both GET and POST can transmit information: a GET request can carry a body, and a POST request can pass data in the URL instead of the body. All of this is technically possible, just as you could haul freight in an ambulance or rescue people with a truck; it works, but it does not match people's expectations or the semantics defined by HTTP, and without rules nothing scales. Following the agreed semantics makes communication more efficient, so the real difference is purely semantic. As for security, that is HTTPS's job.
Finally
This article took two weeks of on-and-off writing. If it helped you, a like, a share, and a follow would be much appreciated. Thank you for your support!
Shoulders of giants
- Geek Time: Perspective on HTTP Protocol
- Illustrated HTTP
- blog.konghy.cn/2019/08/06/…
- aws.amazon.com/cn/route53/…
- developer.mozilla.org/en-US/docs/…
- Cross-domain: blog.csdn.net/github_3470…
- HTTP status codes: developer.mozilla.org/zh-CN/docs/…
- entropymine.com/jason/testb…
- httpwg.org/specs/rfc75…
- imququ.com/post/header…
- developers.google.com/web/fundame…