HTTP/3: From root to tip
Original article by Lucas Pardue
HTTP is the application-layer protocol that underpins the Web. In 1991, HTTP/0.9 was released, and by 1999 it had developed into HTTP/1.1, a protocol standardized by the Internet Engineering Task Force (IETF). HTTP/1.1 worked well for a long time, but in the face of today’s changing Web requirements it became clear that a more suitable protocol was needed. In 2015, HTTP/2 was born. Recently, news broke that the IETF intends to release a new version, HTTP/3. For some this came as a surprise, and it sparked heated debate in the industry. If you don’t pay close attention to the IETF, the arrival of HTTP/3 may seem sudden. In truth, its roots can be traced through a lineage of experiments and Web protocol evolution, in particular the QUIC transport protocol.
If you’re not familiar with QUIC, check out some of my colleagues’ high-quality posts: John’s blog discusses some of the problems with HTTP today from different angles, Alessandro’s blog covers the transport-layer details, and Nick’s blog explains how to get hands-on with testing. We’ve collected all of these, and more, at cloudflare-quic.com. If that piques your interest, be sure to check out quiche, our own open-source implementation of QUIC written in Rust.
HTTP/3 is the mapping of HTTP onto the QUIC transport layer. The name was made official in the recent (late October 2018) 17th draft (draft-ietf-quic-http-17), which was discussed and reached rough consensus at IETF 103 in November. HTTP/3 was previously known as “HTTP over QUIC”; before that we had gQUIC, and before that SPDY. The truth of the matter is that HTTP/3 is just a new HTTP syntax for IETF QUIC, a UDP-based multiplexed and secure transport.
In this article, we’ll discuss the history behind some of HTTP/3’s previous names, as well as the reasons for the recent name change. We’ll go back to the early days of HTTP and look for fond memories of its growth along the way. If you can’t wait, check out the end of the article, or open the detailed SVG version.
HTTP/3 Layered Model (cake model)
Setting the background
Before we focus on HTTP, it’s worth recalling that there are two protocols that share the name QUIC. As we explained earlier, gQUIC usually refers to Google QUIC (the original protocol), while QUIC is commonly used to mean the IETF standard-in-development, which diverges from gQUIC.
Web requirements have changed since the early 1990s. We have had new versions of HTTP, and user security has been enhanced in the form of Transport Layer Security (TLS). This article will only touch on TLS; if you’d like to explore that area in more detail, check out our other high-quality posts.
To help us understand the history of HTTP and TLS, I’ve collated details of protocol specifications and dates. This information is typically presented as text, such as a bullet list of document titles sorted by date. However, standards work branches and overlaps in time, and simple lists cannot express those complex relationships. In HTTP, parallel work has refactored the core protocol definition, extended the protocol with new behavior for easier use, and even redefined how the protocol exchanges data over the Internet for better performance. When you try to make sense of nearly 30 years of Internet history across different branching workstreams, you need a visualization. So I made the Cloudflare Secure Web Timeline (note: technically it’s an evolutionary tree, but the term timeline is better known).
In creating it, after much thought, I chose to focus on the successful branches within the IETF. Not covered is the work of the W3C HTTP-NG working group, nor some fanciful ideas whose authors are keen to explain how to pronounce them: HMURR (pronounced “hammer”) and WAKA (pronounced “wah-kah”).
To give you a better sense of context, in the sections that follow I’ll explain the highlights of HTTP history along this timeline. Making sense of them requires an understanding of standardization and how the IETF approaches it, so we’ll start with a brief overview of that topic before returning to the timeline. Feel free to skip the next section if you’re already familiar with the IETF.
Types of Internet standards
In general, standards define common terms of reference, scope, applicability, and other considerations. Standards come in many shapes and sizes, and can be informal (i.e., de facto) or formal (agreed and published by a standards-defining organization such as the IETF, ISO, or MPEG). Standards are used in many fields; there is even a formal standard for making tea, BS 6008.
The early Web used HTTP and SSL protocol definitions that were published outside the IETF; these are marked as red lines on the Secure Web timeline. The uptake of these protocols by clients and servers made them de facto standards.
At some point, it was decided to formalize these protocols (for some motivating reasons described later). Internet standards are commonly defined in the IETF, guided by the informal principle of “rough consensus and running code.” This is grounded in the experience of developing and deploying things on the Internet. It stands in stark contrast to a “clean room” approach of trying to develop perfect protocols in a vacuum.
IETF Internet standards are commonly referred to as RFCs. This is a complex area to explain, so I recommend reading the blog post “How to Read an RFC” by Mark Nottingham, chair of the QUIC Working Group. Work in the IETF is carried out in Working Groups (WGs), and a WG is, more or less, just a mailing list.
The IETF meets three times a year, providing time and facilities for all working groups to meet in person if they wish. The agenda for these weeks is packed tightly, leaving limited time to dig deeply into highly technical areas. To address this, some working groups also choose to hold interim meetings between the general IETF sessions. This helps maintain momentum in specification development. The QUIC Working Group has held several interim meetings since 2017; the full list is available on its meetings page.
These IETF meetings also provide an opportunity for other IETF-related groups to convene, such as the Internet Architecture Board or the Internet Research Task Force. In recent years, an IETF Hackathon has been held immediately before each IETF meeting. This gives the community an opportunity to develop running code and, more importantly, to test interoperability with others. This helps surface issues in specifications that can then be discussed in the following sessions.
The most important point for this blog is to understand that RFCs do not spring into existence from a vacuum. Instead, they go through a process that usually starts with an IETF Internet-Draft (I-D), which is submitted for consideration for adoption. In the case of an already-published specification, preparing an I-D may be a simple reformatting exercise. I-Ds have a six-month active lifetime from publication; to keep them alive, new versions must be published. In practice, there is little consequence to letting an I-D expire, and it happens all the time. The documents remain available on the IETF document site for anyone who wants to read them.
I-Ds are shown in purple on the Secure Web timeline. Each one has a unique name of the form draft-{author name}-{working group}-{topic}-{version}. The working group field is optional; it may predict the IETF WG that will work on the draft, and it can change. If an I-D is adopted by the IETF, or if an I-D is initiated directly within the IETF, the name takes the form draft-ietf-{working group}-{topic}-{version}. I-Ds may branch, merge, or die. Versions start at 00 and increase by one with each new draft; for example, the fourth draft of an I-D has version 03. Whenever an I-D changes its name, its version number resets to 00.
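To make the naming convention concrete, here is a minimal sketch in Python that parses I-D names into their parts. The regular expression follows the format described above, and the example names are real drafts mentioned in this article; note that the sketch lumps the optional working-group field and the topic together, since only context distinguishes them.

```python
import re

# A minimal sketch of the I-D naming convention described above.
# Matches names such as "draft-fielding-http-spec-00" (individual
# submission) and "draft-ietf-quic-http-17" (working-group item).
DRAFT_NAME = re.compile(
    r"^draft-"
    r"(?P<source>[a-z0-9]+)-"  # author name, or "ietf" for WG items
    r"(?P<rest>[a-z0-9-]+)-"   # working group and/or topic (not separable here)
    r"(?P<version>\d{2})$"     # 00, 01, 02, ... resets to 00 on rename
)

def parse_draft(name: str) -> dict:
    m = DRAFT_NAME.match(name)
    if not m:
        raise ValueError(f"not an I-D name: {name}")
    parts = m.groupdict()
    parts["wg_item"] = parts["source"] == "ietf"  # adopted by a working group?
    return parts

if __name__ == "__main__":
    for n in ["draft-fielding-http-spec-00", "draft-ietf-quic-http-17"]:
        print(n, "->", parse_draft(n))
```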
Note that anyone can submit an I-D to the IETF; you should not treat them as standards. But if an I-D is taken through the IETF standardization process, reaches consensus, and passes the final document review, we end up with an RFC. The name changes again at this stage. Each RFC gets a unique number, e.g., RFC 7230. These are shown in blue on the Secure Web timeline.
An RFC is an immutable document. This means that any change to an RFC results in a document with a completely new number. Changes might capture errata (editorial or technical errors that were found and reported) or simply refactor the specification to improve its layout. An RFC may obsolete older versions, or merely update them (make material changes).
All IETF documents are openly available at tools.ietf.org. Personally, I find the IETF Datatracker a little more user-friendly, because it provides a visualization of a document’s progress from I-D to RFC.
The following example tracks the development of RFC 1945 — HTTP/1.0, and it is a clear source of inspiration for the Secure Web timeline.
IETF RFC 1945 Datatracker view
Interestingly, in the course of my work I found that the above visualization is incorrect. For some reason it is missing draft-ietf-http-v10-spec-05. Since an I-D has a lifetime of only six months, the chart suggests a gap before it became an RFC, when in fact draft 05 remained active until around August 1996.
Exploring the Secure Web timeline
With a little understanding of how Internet standards documents come to be, we can start to walk the Secure Web timeline. This section contains a number of excerpts showing important parts of the timeline. Each dot corresponds to the date a document or capability became available. For IETF documents, draft numbers are omitted for clarity. If you want to see all the detail, check out the complete timeline.
HTTP began life as the HTTP/0.9 protocol in 1991, and in 1994 the I-D draft-fielding-http-spec-00 was published. It was adopted by the IETF soon after, causing the name change to draft-ietf-http-v10-spec-00. The I-D went through six draft versions before being published as RFC 1945 — HTTP/1.0 in 1996.
Even before HTTP/1.0 was complete, a separate branch of work had started on HTTP/1.1. The I-D draft-ietf-http-v11-spec-00 was published in November 1995 and was formally published as RFC 2068 in 1997. The keen-eyed will spot that the Secure Web timeline doesn’t quite capture that sequence of events; this is an unfortunate side effect of the tool used to generate the visualization. I tried to minimize such problems where possible.
A revision of HTTP/1.1 was started in mid-1997 in the form of draft-ietf-http-v11-spec-rev-00. This work completed in 1999 with the publication of RFC 2616. The IETF HTTP world then went quiet until 2007. We’ll come back to that shortly.
History of SSL and TLS
Now let’s look at SSL. We can see that the SSL 2.0 specification was released sometime around 1995, and SSL 3.0 in November 1996. Interestingly, SSL 3.0 is described by RFC 6101, released in August 2011. This is a Historic document, a status which, in IETF terms, is usually used “to document ideas that were considered and discarded, or protocols that were already historic when it was decided to document them.” In this case, having an IETF document that describes SSL 3.0 is advantageous because it can be used as a canonical reference elsewhere.
Of more interest to us is how SSL inspired the development of TLS, which began life as draft-ietf-tls-protocol-00 in November 1996. It went through six draft versions and was published as RFC 2246 — TLS 1.0 at the start of 1999.
Between 1995 and 1999, the SSL and TLS protocols were already being used to secure HTTP traffic on the Internet. As a de facto standard, this worked just fine. It was not until January 1998 that the formal standardization process for HTTPS began, with the publication of the I-D draft-ietf-tls-https-00. That work concluded in May 2000 with the publication of RFC 2818 — HTTP over TLS.
TLS continued to evolve between 2000 and 2007, with the standardization of TLS 1.1 and 1.2. There was then a gap of seven years before work began on the next version of TLS, which was adopted as draft-ietf-tls-tls13-00 in April 2014 and, after 28 drafts, completed as RFC 8446 — TLS 1.3 in August 2018.
The Internet standardization process
After looking at the timeline, I hope you have a sense of how the IETF works. One generalization of how Internet standards form is that researchers or engineers design experimental protocols that suit their specific use cases. These are tested, in public or in private, at various scales. The data helps identify areas that can be improved or problems that need fixing. The work may be published to explain the experiment, to gather broader input, or to help find other implementers. Take-up of this early work by others can make it a de facto standard; eventually there may be enough momentum that formal standardization becomes an option.
The status of a protocol can be an important factor for organizations that are considering implementing, deploying, or using it. A formal standardization process can make a de facto standard more attractive, because standardization tends to provide stability. Management and guidance are provided by an organization, such as the IETF, that reflects a broader range of experience. However, it is worth stressing that not all formal standards succeed.
The process of creating a final standard is almost as important as the standard itself. Taking an initial idea and inviting contributions from people with wider knowledge, experience, and use cases can help produce something that is useful to a wider population. But the standardization process is not easy. There are pitfalls and obstacles, and sometimes the process takes so long that the output is no longer relevant.
Each standards-defining organization has its own process, centered around its field and its participants. Explaining all the details of how the IETF works is well beyond the scope of this blog. The IETF’s “How we work” page is a good starting point and covers a lot of ground. As usual, the best way to build understanding is to participate. That can be as easy as joining a mailing list or adding to the discussion on a relevant GitHub repository.
Cloudflare and running code
Cloudflare prides itself on being an early adopter of new and evolving protocols. We have a track record of adopting new standards early, such as HTTP/2. We have also tested features that were experimental or yet to be finalized, such as TLS 1.3 and SPDY.
In the IETF standardization process, deploying running code on real networks, across many diverse sites, helps us understand how well a protocol will work in practice. We combine our existing expertise with implementation experience to help improve the running code and, where it makes sense, to feed issues or improvements back to the working groups that are standardizing the protocols.
Testing new things is not the only priority. Part of being an innovator is knowing when it is time to move forward and leave older innovations behind. Sometimes this relates to security: for example, Cloudflare disabled SSLv3 by default because of the POODLE vulnerability. In other cases, protocols are superseded by more technologically advanced ones: Cloudflare deprecated SPDY in favor of HTTP/2.
The introduction and deprecation of relevant protocols appear in orange on the Secure Web timeline. Vertical dashed lines help correlate Cloudflare events with related IETF documents. For example, Cloudflare introduced support for TLS 1.3 in September 2016; the final document, RFC 8446, was published almost two years later, in August 2018.
Refactoring HTTPbis
HTTP/1.1 proved to be a very successful protocol, and the timeline shows that after 1999 there was not much activity in the IETF. However, years of active use of RFC 2616 yielded practical experience that surfaced latent problems, which in turn caused some interoperability issues. Furthermore, other RFCs (such as 2817 and 2818) extended the protocol. In 2007, it was decided to kick off a new effort to improve the specification of the HTTP protocol, called HTTPbis (“bis” comes from the Latin meaning “two,” “twice,” or “repeat”), which took the form of a new working group. The original charter describes the problems it set out to solve.
In short, HTTPbis decided to refactor RFC 2616. It would incorporate errata fixes and pull in some aspects of other specifications that had been published in the meantime. The document was split into several parts, which resulted in six I-Ds published in December 2007:
- draft-ietf-httpbis-p1-messaging
- draft-ietf-httpbis-p2-semantics
- draft-ietf-httpbis-p4-conditional
- draft-ietf-httpbis-p5-range
- draft-ietf-httpbis-p6-cache
- draft-ietf-httpbis-p7-auth
The chart shows how this work progressed through a seven-year drafting process, with 27 draft versions issued before final standardization. In June 2014, the RFC 723x series (where x ranges from 0 to 5) was published. The chair of the HTTPbis Working Group celebrated the achievement with the declaration “RFC2616 is Dead.” In case it isn’t clear: these new documents obsoleted the old RFC 2616.
What does this have to do with HTTP/3?
Despite all the IETF’s busy work on the RFC 723x series, technological progress did not stop. People continued to enhance, extend, and experiment with HTTP on the Internet. Google had already begun experimenting with something called SPDY (pronounced “speedy”). This protocol promised to improve Web browsing performance, a primary use case for HTTP. SPDY v1 was released in late 2009, followed by SPDY v2 in 2010.
I want to avoid going into SPDY’s technical details, as that is a whole topic of its own. What matters is understanding that SPDY took the core HTTP paradigm and improved performance by changing the interchange format. Here we can see that HTTP makes a clean distinction between semantics and syntax. Semantics describe the concepts of requests and responses, including: methods, status codes, header fields (metadata), and bodies (payload). Syntax describes how those semantics are mapped to bytes on the wire.
HTTP/0.9, 1.0, and 1.1 share much of the same semantics. They also share syntax, in the form of character strings sent over TCP connections. SPDY took HTTP/1.1’s semantics and changed the syntax from strings to binary. That is a genuinely interesting topic, but we won’t go down that rabbit hole today.
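To illustrate the semantics/syntax split, here is a small, hedged Python sketch: the same semantic message rendered in HTTP/1.1’s textual syntax. The field values are arbitrary examples; SPDY, HTTP/2, and HTTP/3 carry the same semantic parts, but serialize them as binary frames rather than CRLF-delimited strings.

```python
# The semantic parts of a request: method, target, headers, body.
# These concepts are shared by every HTTP version.
semantics = {
    "method": "GET",
    "target": "/index.html",
    "headers": {"host": "example.com", "accept": "text/html"},
    "body": b"",
}

def to_http1_syntax(msg: dict) -> bytes:
    """Map the semantic parts onto HTTP/1.1's textual wire format."""
    lines = [f"{msg['method']} {msg['target']} HTTP/1.1"]
    lines += [f"{name}: {value}" for name, value in msg["headers"].items()]
    return ("\r\n".join(lines) + "\r\n\r\n").encode() + msg["body"]

# The same semantics could instead be encoded as binary frames with
# compressed headers, which is (roughly) what SPDY and HTTP/2 do.
print(to_http1_syntax(semantics).decode())
```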
Google’s experiments with SPDY showed that changing HTTP syntax was promising, and that keeping the existing HTTP semantics made sense. For example, keeping the URL format, https://, avoided many problems that could have affected adoption.
After seeing some of the positive results, the IETF decided it was time to consider HTTP/2.0. Slides from the HTTPbis session held during IETF 83 in March 2012 show the requirements, goals, and measures of success that were set out. They also explicitly state that “HTTP/2.0 only signifies that the wire format isn’t compatible with that of HTTP/1.x.”
During that meeting, the community was invited to share proposals. I-Ds submitted for consideration included draft-mbelshe-httpbis-spdy-00, draft-montenegro-httpbis-speed-mobility-00, and draft-tarreau-httpbis-network-friendly-00. Ultimately, the SPDY draft was adopted, and work began on draft-ietf-httpbis-http2-00 in November 2012. After 18 drafts over a period of just over two years, RFC 7540 — HTTP/2 was published in 2015. During this specification process, the detailed syntax of HTTP/2 diverged enough to make HTTP/2 and SPDY incompatible.
These were very busy years for HTTP in the IETF, with the HTTP/1.1 refactoring and the HTTP/2 standardization taking place in parallel. This was in stark contrast to the quiet of the early 2000s. Check out the full timeline to see the full extent of the work.
Although HTTP/2 was still being standardized, the benefits of using and experimenting with SPDY were clear. Cloudflare introduced support for SPDY in August 2012 and deprecated it in February 2018, when our statistics showed that fewer than 4% of Web clients still wanted SPDY. Meanwhile, we introduced support for HTTP/2 in December 2015, not long after the RFC was published, once our analysis showed that a meaningful proportion of Web clients could take advantage of it.
Web client support for the SPDY and HTTP/2 protocols required the secure option of TLS. The introduction of Universal SSL in September 2014 helped ensure that all websites signed up to Cloudflare could take advantage of these new protocols.
gQUIC
Google continued to experiment between 2012 and 2015, releasing SPDY v3 and v3.1. They also began working on gQUIC (at the time known simply as QUIC), publishing the initial public specification in early 2012.
Early versions of gQUIC used the SPDY v3 form of HTTP syntax. This choice made sense because HTTP/2 was not yet finished. The SPDY binary syntax was packaged into QUIC packets that could be sent in UDP datagrams. This was a departure from the TCP transport that HTTP traditionally relied on. Stacked up, it looked like this:
SPDY gQUIC layered model (cake model)
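As a rough illustration of what that transport change means at the socket layer, the Python sketch below sends a single UDP datagram, the primitive on top of which gQUIC (and later IETF QUIC) builds its own streams, reliability, and encryption. The address and payload are placeholders, not part of any real protocol exchange.

```python
import socket

# TCP gives HTTP/1.x and HTTP/2 an ordered byte stream managed by the
# kernel; gQUIC/QUIC instead starts from raw UDP datagrams and
# implements loss recovery, ordering, and security itself.
payload = b"\x00" * 64  # stand-in for an encrypted QUIC packet

udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Datagrams are fire-and-forget: nothing needs to be listening on the
# placeholder address 127.0.0.1:4433 for this send call to succeed.
udp.sendto(payload, ("127.0.0.1", 4433))
udp.close()

# By contrast, TCP requires a handshake with a live peer before any
# application bytes can flow:
# tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# tcp.connect(("127.0.0.1", 443))  # fails if no server is listening
# tcp.sendall(payload)
```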
gQUIC used clever tricks to achieve performance gains. One of these was breaking the clear layering between the application and the transport. In practice, this meant that gQUIC only ever supported HTTP. So much so that gQUIC, branded at the time simply as “QUIC,” became synonymous with the next candidate version of HTTP. Despite the continued changes to QUIC in the years since, which we’ll touch on shortly, to this day many people understand the term QUIC to mean that original HTTP variant. Unfortunately, this is a regular source of confusion when discussing the protocol.
gQUIC continued to experiment and eventually switched to a syntax much closer to HTTP/2, so much so that most people called it “HTTP/2 over QUIC.” However, because of technical constraints, there were some very subtle differences. One example is how HTTP headers were serialized and exchanged. The difference is subtle, but in effect it means that HTTP/2-style gQUIC was incompatible with the IETF’s HTTP/2.
Last but not least, we always need to consider the security aspects of Internet protocols. gQUIC chose not to use TLS to provide security; instead, Google developed a different approach called QUIC Crypto. One of its interesting aspects was a new method for speeding up security handshakes: a client that had previously established a secure session with a server could reuse that information to do a “zero round-trip time,” or 0-RTT, handshake. 0-RTT was later incorporated into TLS 1.3.
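The benefit is easy to see with back-of-the-envelope arithmetic. The sketch below is a simplified model (it ignores TCP Fast Open and other variations) using the commonly cited round-trip counts: one RTT for the TCP handshake, two for a full TLS 1.2 handshake, one for TLS 1.3, and zero extra RTTs before application data in a resumed 0-RTT handshake.

```python
# A rough model of connection-setup latency, assuming each network
# round trip costs `rtt_ms` milliseconds before request bytes can flow.
def setup_latency(rtt_ms: float) -> dict:
    return {
        "TCP + TLS 1.2": 3 * rtt_ms,        # 1-RTT TCP + 2-RTT TLS handshake
        "TCP + TLS 1.3": 2 * rtt_ms,        # 1-RTT TCP + 1-RTT TLS handshake
        "QUIC (first contact)": 1 * rtt_ms,  # combined transport+crypto setup
        "QUIC 0-RTT (resumed)": 0.0,        # reuse cached secrets, send now
    }

for proto, ms in setup_latency(rtt_ms=100).items():
    print(f"{proto:22s} {ms:5.0f} ms before request bytes flow")
```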
Now can I tell you what HTTP/3 is?
Of course.
By now, you should have a handle on how standardization works, and gQUIC was no exception. You may also be interested to know that Google wrote up its specification in I-D format: draft-tsvwg-quic-protocol-00, titled “QUIC: A UDP-based secure and reliable transport for HTTP/2,” was submitted in June 2015. Keep in mind what I said earlier about the syntax being almost HTTP/2.
Google announced that it would hold a Bar BoF at IETF 93 in Prague. If you’re wondering what a “Bar BoF” is, consult RFC 6771. Hint: BoF stands for Birds of a Feather.
In a nutshell, the outcome of this engagement with the IETF was that QUIC seemed to offer many advantages at the transport layer, and that it should be decoupled from HTTP; the clear separation between layers should be reintroduced. Furthermore, there was a preference for returning to a TLS-based handshake (which was not such a setback, since TLS 1.3 was under way by this point and was incorporating 0-RTT handshakes).
About a year later, in 2016, a new set of I-Ds was submitted:
- draft-hamilton-quic-transport-protocol-00
- draft-thomson-quic-tls-00
- draft-iyengar-quic-loss-recovery-00
- draft-shade-quic-http2-mapping-00
Here is another source of confusion about HTTP and QUIC. draft-shade-quic-http2-mapping-00 is titled “HTTP/2 Semantics Using The QUIC Transport Protocol” and describes itself as “a mapping of HTTP/2 semantics over QUIC.” However, that is a misnomer: HTTP/2 was about changing syntax while maintaining semantics. Furthermore, “HTTP/2 over gQUIC” was never an accurate description of the syntax either, for the reasons I outlined earlier. Keep that in mind.
This IETF version of QUIC was to be an entirely new transport protocol. That is a big undertaking, so before diving head-first into such a commitment, the IETF likes to gauge the actual interest of its members. To do this, a formal Birds of a Feather meeting was held at IETF 96 in Berlin in 2016. I was lucky enough to attend the session in person, and the slides don’t do it justice. The meeting was attended by hundreds of people, as Adam Roach’s photograph shows. At the end of the session, consensus was reached: QUIC would be adopted and standardized at the IETF.
The first IETF QUIC I-D for mapping HTTP onto QUIC, draft-ietf-quic-http-00, took the Ronseal approach and simplified its name to “HTTP over QUIC.” Unfortunately, it didn’t quite finish the job, and many instances of the term HTTP/2 lingered throughout the body. Mike Bishop, the I-D’s new editor, identified this and started to fix the HTTP/2 misnomer. In the 01 draft, the description changed to “a mapping of HTTP semantics over QUIC.”
Gradually, over time and versions, the use of “HTTP/2” decreased, and the remaining instances became mere references to parts of RFC 7540. Roll forward two years to October 2018, and the I-D had reached version 16. Although HTTP over QUIC bears similarity to HTTP/2, it is ultimately an independent, non-backwards-compatible HTTP syntax. However, for those who don’t track IETF development very closely (and there are a lot of them), the document name doesn’t capture that difference. One of the main points of standardization is to aid communication and interoperability; yet a simple thing like naming was a major contributor to the confusion in the community.
Recall what was said in 2012: “HTTP/2.0 only signifies that the wire format isn’t compatible with that of HTTP/1.x.” The IETF followed that existing cue. After deliberation, consensus was reached at IETF 103: “HTTP over QUIC” would be renamed HTTP/3. The world is now a better place, and we can move on to more important debates.
But RFC 7230 and 7231 disagree with your definition of semantics and syntax!
Document titles can sometimes be confusing. The current HTTP documents that describe syntax and semantics are:
- RFC 7230 — Hypertext Transfer Protocol (HTTP/1.1): Message Syntax and Routing
- RFC 7231 — Hypertext Transfer Protocol (HTTP/1.1): Semantics and Content
It is possible to read too much into these names and conclude that core HTTP semantics are specific to a version of HTTP, i.e., HTTP/1.1. However, this is an unintended side effect of the HTTP family tree. The good news is that the HTTPbis working group is trying to address it. A few brave members are doing another round of document revision or, as Roy Fielding put it, “one more time!” This work is under way now as the HTTP core activity (you may also have heard of it under the monikers HTTPtre or HTTPter; naming things is hard). It will condense the six drafts down to three:
- HTTP Semantics (draft-ietf-httpbis-semantics)
- HTTP Caching (draft-ietf-httpbis-cache)
- HTTP/1.1 Message Syntax and Routing (draft-ietf-httpbis-messaging)
Under this new structure, it becomes more evident that HTTP/2 and HTTP/3 are only syntax definitions for the common HTTP semantics. That doesn’t mean they have no features of their own beyond syntax, but it should frame the debate going forward.
Conclusion
This article has provided an overview of how the IETF has standardized HTTP over the past 30 years. Without getting into many technical details, I have tried to explain how we arrived at HTTP/3 today. If you skipped the good bits in the middle and just want the one-liner: HTTP/3 is just a new HTTP syntax for IETF QUIC, a UDP-based multiplexed and secure transport. There are still many interesting technical areas to explore, but they will have to wait for another time.
In the course of this article, we explored important chapters in the development of HTTP and TLS, but we did so separately. To close, they are brought together in the complete Secure Web timeline presented below. You can use it to investigate the detailed history at your own pace. And for the super sleuths, be sure to check out the full version, including draft numbers.