In a real-time application, information must reach the client as soon as it becomes available on the server, and the classic HTTP request/response pattern is simply not up to the job: the server remains silent, whether or not there is new data, until the consumer requests an update.
This limitation has led to a variety of hacks and workarounds (some of which have become formal and widely adopted) as developers try to adapt the request/response model to the needs of a more dynamic, real-time Web.
All of these techniques and approaches, from Comet to HTTP long polling, have one thing in common: they essentially create the illusion of truly real-time (event-driven) data exchange, so that when the server has new data, it sends a response.
Although HTTP is not an event-driven protocol, and therefore not truly real-time, these methods are actually very effective in specific use cases, such as Gmail chat. However, problems can arise in low-latency or large-scale applications, mainly because of processing requirements related to HTTP.
That is, with HTTP you have to constantly request updates (and receive responses), which is very resource-intensive: the client establishes a connection -> requests updates -> gets a response from the server -> closes the connection. Imagine this cycle repeated endlessly by thousands of concurrent users; it puts a very heavy strain on the server.
It was these issues that eventually led developers Michael Carter and Ian Hickson to develop WebSockets, essentially a thin transport layer built on top of a device’s TCP/IP stack. The aim was to give Web applications a communication layer that is as close to raw TCP as possible, barring a few abstractions needed to eliminate certain security-related complexities and other issues.
This article looks at some of the techniques used to circumvent the limitations of HTTP request/response patterns in real-time applications, some of the problems associated with each, and how WebSockets can help overcome these problems.
HTTP
HTTP, essentially a request/response protocol in the client-server computing model, is the main communication mode on the World Wide Web. The original version, proposed by Tim Berners-Lee in 1989 as an application protocol, was very limited and was quickly modified to support a wider range of browser and server capabilities.
These changes were finally documented by the HTTP Working Group in 1996 as HTTP/1.0 (RFC 1945), although HTTP/1.0 was never considered a formal specification or Internet standard.
HTTP/1.1
The arrival of HTTP/1.1, the most widely supported version in Web browsers and servers, was a big step forward, as it implemented some very important optimizations and enhancements, from persistent connections and pipelining to new request/response header fields. Chief among them are two headers that underpin many of the improvements that help make the real-time Web more dynamic:
Keep-alive header: used to set up persistent communication between hosts, meaning the connection can be reused for multiple requests. This significantly reduces request latency, because the client does not need to renegotiate the TCP three-way handshake after the first request. Another positive side effect is that connections get faster over time thanks to TCP’s slow-start mechanism. Prior to HTTP/1.1, a new connection had to be opened for each request/response pair.
Upgrade header: used to upgrade the connection to an enhanced protocol (such as WebSockets).
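For illustration, this is what an upgraded connection looks like on the wire, using the example handshake from RFC 6455 (host and path are the RFC’s placeholders): the client asks for an upgrade, and the server answers 101 Switching Protocols.

```http
GET /chat HTTP/1.1
Host: server.example.com
Connection: Upgrade
Upgrade: websocket
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Version: 13

HTTP/1.1 101 Switching Protocols
Connection: Upgrade
Upgrade: websocket
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

From this point on, the TCP connection carries WebSocket frames instead of HTTP messages.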
HTTP polling
HTTP polling represents an advance on the classic request/response mechanism, and although polling comes in several variants, only long polling is really suitable for real-time Web applications.
For example, HTTP short polling uses an Ajax-based timer to make the client send requests to the server at fixed intervals. The server still responds to each request immediately, either providing new data or, if there is none, sending an “empty” response before closing the connection. This is not much use in real-time applications, where the client needs to react as soon as new data becomes available.
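A minimal short-polling sketch in JavaScript, assuming a hypothetical `/updates` endpoint; the fetch function is injected so the loop can be exercised without a real server:

```javascript
// Short polling: ask the server for updates on a fixed timer.
// `fetchFn` is injected (e.g. () => fetch('/updates').then(r => r.json()))
// so this sketch can run without a real server.
function startShortPolling(fetchFn, onData, intervalMs = 3000) {
  const timer = setInterval(async () => {
    const res = await fetchFn();           // server answers immediately...
    if (res && res.data) onData(res.data); // ...possibly with an "empty" body
  }, intervalMs);
  return () => clearInterval(timer);       // call the returned function to stop
}
```

Note that every tick costs a full request/response round trip even when the server has nothing new to say, which is exactly the overhead described above.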
It was this limitation that led to the development of HTTP long polling, which is essentially a technique designed to simulate server push functionality.
Essentially, long polling is a technique in which the server keeps a client’s request open for as long as possible (typically around 20 seconds), sending a response only once data becomes available or a timeout threshold is reached.
The main advantage of long polling is that, in theory, new information is sent to the client as soon as it becomes available. The downside, however, is the additional overhead of handling HTTP requests.
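The difference shows up clearly in code: a long-polling client has no timer at all. It simply re-issues the request as soon as the previous (long-held) response comes back. As before, `fetchFn` is an injected stand-in for a real fetch call to a hypothetical endpoint:

```javascript
// Long polling: the server holds each request open (typically ~20 s) and
// responds only when data arrives or its timeout elapses; the client then
// immediately reconnects. `maxRounds` exists only so the sketch terminates.
async function longPoll(fetchFn, onData, { maxRounds = Infinity } = {}) {
  for (let round = 0; round < maxRounds; round++) {
    const res = await fetchFn();           // may block for a long time
    if (res && res.data) onData(res.data); // timeout responses carry no data
  }
}
```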
HTTP streaming
HTTP streaming is a push data transfer technique that allows a Web server to continuously send data to a client over a single HTTP connection that remains open indefinitely. Essentially, the client makes an HTTP request and the server pushes back a response of uncertain length.
However, while HTTP streaming performs well, is easy to use, and is an alternative to WebSockets, it has limitations. From a real-time perspective, the main problem is that an intermediary (such as a proxy) can interrupt the connection, whether through timeouts or simply because it serves multiple requests in round-robin fashion, so real-time delivery is not always guaranteed.
HTTP/2.0
HTTP/2.0 evolved from SPDY, an experimental protocol originally announced by Google in 2009. By 2015, the HTTP Working Group had published HTTP/2.0 as a proposed standard, with the SPDY specification as a starting point.
It is essentially a performance update designed to speed up Web communications, with two main utility features:
- Multiplexing: instead of transmitting data as plaintext, data is encoded as binary and encapsulated in frames that can be multiplexed along bidirectional channels called streams, all over a single TCP connection. This allows many parallel requests/responses to occur simultaneously.
- Server push: a performance feature that allows a server to send responses to an HTTP/2-compliant client before the client requests them. It is useful when the server knows the client will need the pushed responses to fully process the original request.
Despite these advances, the explosion of Internet traffic today due to the heavy use of mobile devices makes it difficult for HTTP/2.0 to provide a smooth and transparent Web browsing experience, especially in the context of the growing demands of real-time applications and their users.
Advantages
- With an SSL certificate installed, all browsers support HTTP/2 (over HTTPS only).
- HTTP/2 allows clients to send all requests concurrently over a single TCP connection, which in theory lets clients load resources faster.
- TCP is a reliable, stable connection protocol.
Disadvantages
- Concurrent requests add load to the server. An HTTP/2 server can receive requests in large batches, which can cause requests to time out. Server load spikes can be addressed by using a load balancer or proxy server to throttle forwarded requests.
- Server support for HTTP/2 prioritization is not yet mature. Software support is still evolving, and some CDNs or load balancers may not support prioritization correctly.
- The HTTP/2 push feature can be difficult to implement correctly.
- HTTP/2 addresses HTTP head-of-line blocking, but blocking at the TCP level still causes problems.
HTTP/3.0
HTTP/3.0 is a new iteration of HTTP that has been in development since 2018, and although it is still a draft standard, some browsers, such as Chrome, already support it.
The goal of HTTP/3 is to provide fast, reliable, and secure Web connections on all kinds of devices by addressing HTTP/2’s transport-related issues. To do this, it uses a different transport-layer network protocol called QUIC, which runs on the User Datagram Protocol (UDP) rather than on TCP as all earlier versions do.
However, some potential problems with HTTP/3 have started to emerge, such as:
- Transport-layer impact: the transition to HTTP/3 involves changes not only at the application layer but also at the underlying transport layer, so adopting HTTP/3 is more challenging than adopting its predecessor.
- Reliability and data integrity issues: UDP is generally suited to applications that can tolerate packet loss, because UDP guarantees neither that packets arrive in order nor, in fact, that they arrive at all. So if data integrity matters to an application and it uses HTTP/3, mechanisms will have to be built on top to ensure that messages arrive intact and in order.
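One such mechanism is a receive-side reorder buffer. This minimal sketch (the `{ seq, payload }` framing is an assumption of the example, not part of any standard) holds out-of-order datagrams until the gaps are filled, then delivers them in sequence:

```javascript
// Reorder buffer: deliver messages in sequence order even when the
// underlying datagrams arrive out of order, as an app on UDP must do.
function createReorderBuffer(deliver) {
  let next = 0;                        // next sequence number to deliver
  const pending = new Map();           // out-of-order messages, keyed by seq
  return function onDatagram({ seq, payload }) {
    pending.set(seq, payload);
    while (pending.has(next)) {        // flush any contiguous run
      deliver(pending.get(next));
      pending.delete(next++);
    }
  };
}
```

A real implementation would also need timeouts and retransmission requests for datagrams that never arrive at all.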
Advantages
- The introduction of a new (different) transport protocol, QUIC, running on UDP, means lower latency both in theory and in current experiments.
- Because UDP performs no error checking and correction in the protocol stack, it is suitable for scenarios where these are not required or are performed by the application. UDP is typically used in time-sensitive applications, such as real-time systems, which cannot wait for packets to be retransmitted and can tolerate some dropped packets.
Disadvantages
- Transport-layer impact: the transition to HTTP/3 involves changes not only at the application layer but also at the underlying transport layer, so adopting HTTP/3 is more challenging than adopting its predecessor.
- Reliability issues: UDP applications often lack reliability guarantees; some degree of packet loss, reordering, error, or duplication must be expected, and any necessary handshakes, such as real-time acknowledgment that a message has been received, must be provided by the application itself.
- HTTP/3 is not yet fully standardized.
WebSockets
For more information on WebSockets, see WebSockets Concepts and Practices in Depth.
WebSockets allow servers and clients to push messages at any time, regardless of the previous request. One significant advantage of using WebSockets is that almost every browser supports WebSockets.
WebSockets solve several of HTTP’s problems:
- Bidirectional protocol: client and server can send messages to each other (in HTTP, requests are always initiated by the client and responses are provided by the server).
- Full-duplex communication: client and server can communicate with each other simultaneously and independently.
- Single TCP connection: after the initial HTTP connection upgrade, client and server communicate over the same TCP connection (a persistent connection) for the entire life of the WebSocket connection, saving server resources and bandwidth.
Advantages
- WebSocket is an event-driven protocol, which means it can be used for true real-time communication. Unlike HTTP, where updates must be requested continually, with WebSockets updates are sent the moment they are available.
- WebSockets keep a single persistent connection open, eliminating the latency problems associated with HTTP-based request/response methods.
- WebSockets don’t use XMLHttpRequest, so headers are not resent every time more information is needed from the server, which reduces the amount of data sent to the server.
Disadvantages
- WebSockets do not automatically recover when a connection terminates. Reconnection has to be implemented by the application, and this is one of the reasons there are so many client-side open source libraries.
- Browsers released before 2011 do not support WebSocket connections, though this can safely be ignored today.
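A common pattern for the missing reconnection logic is exponential backoff. A minimal sketch, where the injected `connect` function is a placeholder for whatever opens the WebSocket (resolving on success, rejecting on failure):

```javascript
// Reconnect with exponential backoff: retry after 250 ms, 500 ms, 1 s, ...
// WebSockets won't do this on their own, so apps (or libraries) add it.
async function connectWithBackoff(connect, { baseMs = 250, maxAttempts = 5 } = {}) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await connect();              // e.g. open and await a WebSocket
    } catch {
      const delay = baseMs * 2 ** attempt; // double the wait each failure
      await new Promise((r) => setTimeout(r, delay));
    }
  }
  throw new Error('gave up after ' + maxAttempts + ' attempts');
}
```

Production-grade libraries typically also add jitter to the delay and re-subscribe to channels after reconnecting.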
Conclusion
Generally, in the context of real-time, continuous communication, WebSockets are a better choice.
HTTP-based techniques tend to consume more resources on the server, whereas WebSockets have a very small server-side footprint. In addition, methods such as long polling involve multiple hops between the device and the server, and the gateways along the way often impose their own limits on how long a connection is allowed to stay open.
WebSockets should be the preferred way to build projects that require long connections, constant updates, and real-time data interaction.