To ensure audio and video quality, WebRTC has done a great deal of work, particularly on network transmission and quality of service, which are its core technologies. This article is compiled from a talk given on LiveVideoStack by Li Chao, chief architect of Beijing Audio and Video Beat Technology Co., Ltd. It analyzes in detail the tension between network quality, real-time transmission, and quality of service, and how WebRTC's underlying technology strikes a balance among them.

Author | Li Chao

Edited by | LiveVideoStack

I am very pleased to discuss with you how WebRTC transmission ensures the quality of audio and video services.

In this talk, I will cover how WebRTC transmission guarantees audio and video service quality from four aspects. First, the goals of real-time communication: we need to establish these goals in order to know what kind of system real-time communication requires and what real-time capability it must guarantee. Second, how WebRTC ensures real-time data transmission. Third, the contradictions that arise between network quality and service quality when real-time transmission must be satisfied. Finally, how WebRTC resolves those contradictions.

1. Objective of real-time communication

1.1 What is the objective of real-time communication?

First, do you prefer to have meetings in the office or online? Second, if there were a concert, would you rather attend in person or listen online?

1.2 Why online and offline feel different

I believe most people would choose offline, because online and offline feel different. The differences are: first, a camera and the human eye do not see the same thing; for example, the camera's capture angle is too narrow to take in the scene from every angle. Second, the quality of capture devices is uneven; in a meeting, some devices are high definition and others are blurry. Finally, and most importantly, the atmosphere of the scene cannot be captured by a camera. Everyone has his or her own presence; when people gather together the atmosphere is warm, but you cannot feel it through a screen.

1.3 Objectives of real-time communication

From the above points, we can conclude that the ultimate goal of real-time communication is to approach, or achieve, the effect of face-to-face communication as closely as possible. As things stand, it is almost impossible to surpass face-to-face communication.

2. A few key indicators

2.1 Several important indicators

So how do we achieve the effect of face-to-face communication? Several important indicators are involved.

The most critical is the delay indicator of real-time communication. Only by making the delay indicator clear can we know what delay meets the requirements of real-time communication, that is, comes close to the effect of face-to-face communication. The second is the audio and video service quality indicator: once the delay target is reached, the quality of the audio and video service is judged against this indicator.

2.2 Real-time communication delay

Let's look at the classification criteria for delay. As the table in the figure shows, if the end-to-end delay is less than 200ms, the whole call is of high quality, and it feels like chatting in the same room. Within 300ms, most people are satisfied. Within 400ms, a small number of people can feel the delay, but interaction is not affected. Above 500ms, the delay significantly affects interaction, and most people are dissatisfied.

Therefore, the most critical threshold is 500ms. Only when the delay is below 500ms can a system be called a qualified real-time interactive system.

2.3 Audio service quality indicators

Next is the audio service quality indicator, which is scored with MOS. 4.0-5.0 is "excellent": very clear audio, small delay, smooth communication. 3.5-4.0 is "good": sound quality slightly worse but still clear, small delay, a little noise. 3.0-3.5 is "medium": reasonable sound quality, clear enough to hear, some delay, communication still possible. 1.5-3.0 is "poor": barely audible, communication requires many repetitions to get through. 0-1.5 is "bad": completely inaudible, large delay, communication breaks down.

2.4 Video service quality indicators

There are several evaluation criteria for video service quality, all judged by MOS score. The reference in the figure is a standard evaluation index based on bit rate. Take 640×480 as an example: to achieve a high-quality MOS of 4.5, the generated bit rate is around 3Mbps. That is too large for real-time use; if 640×480 video takes up 3Mbps, it is a luxury. Generally, we choose a bit rate with a MOS of 3.5 (the green line), which is around 600kbps.

From the above, it can be seen that to guarantee real-time transmission under limited bandwidth, a certain amount of service quality may have to be sacrificed.

3. Principal Contradiction

3.1 Contradiction between real-time communication and service quality

By understanding the above three indicators, we can identify the main contradictions between real-time communication and quality of service.

First, the contradiction between bit rate and bandwidth. To achieve good quality, the bit rate will generally be large (though it cannot exceed the available maximum), while bandwidth is limited, so the two are in tension. Second, the contradiction between real-time performance and service quality. To ensure good real-time performance we generally choose UDP, but UDP does not guarantee reliable transmission; packet loss and out-of-order delivery occur frequently. Once packets are lost or arrive out of order, network transmission quality cannot be guaranteed, which ultimately affects audio and video quality.

Here we can summarize the principal contradictions of real-time communication: audio and video quality versus bandwidth, and real-time performance versus network quality. Other issues, including the audio 3A problems (echo cancellation, noise suppression, automatic gain control), are secondary.

4. Contradiction resolution

4.1 Ways to resolve contradictions

Now let's look at how the contradictions are resolved. For WebRTC, they are addressed from the following aspects: how to guarantee real-time data transmission, how to improve network quality, how to evaluate bandwidth more accurately, and how to balance bit rate and bandwidth.

5. Ensure real-time data

To ensure real-time data delivery, WebRTC provides two mechanisms. One is transmission path selection: it chooses the best path so that the shortest, optimal route is used for end-to-end transmission. The other is transport protocol selection, which can be TCP or UDP. Let's first look at how WebRTC selects the best transmission path.

5.1 Select the Best Path

The diagram shows WebRTC's path-selection architecture. The figure includes three endpoints: A, B, and C. A and B are on the same LAN. If WebRTC finds that the two communicating endpoints are on the same LAN, it connects them directly within the LAN, ensuring the shortest, optimal network path. If A and C are not on the same LAN, WebRTC attempts a P2P direct connection via NAT traversal; if traversal succeeds, the path is still shorter than relaying through a server. Only when P2P fails is server relay used. As the figure shows, when data is relayed through a TURN server, the transmission path is significantly longer than a P2P direct connection, so WebRTC always prefers the shortest, optimal path to ensure end-to-end real-time transmission.

5.2 Using TCP or UDP?

Next, look at WebRTC's choice between TCP and UDP. When the network is good, both can be used for real-time transmission, but in most cases we prefer UDP (its advantages are described later); on a weak network, TCP cannot be used. For firewall traversal, however, TCP has an advantage: inside an enterprise network, TCP can reach port 80 on the public network to get through.

5.3 Why Can't TCP Be Used on Weak Networks?

Why can't TCP be used in a weak network environment? This is due to TCP's mechanism of send, acknowledge, and retransmit on loss. Normally there is no problem transferring data from one end to the other, but when packet loss occurs, there is a big problem.

The figure shows the delay caused by repeated packet loss. The client sends packets to the server, and the server returns an ACK to confirm; after receiving the acknowledgement, the client can continue sending data (the same holds with sliding windows). Each time the client sends data, it starts a timer, whose minimum timeout is 200ms. If for some reason no ACK arrives within 200ms, the client resends the last data packet. Because TCP has a backoff mechanism to avoid flooding the network with retransmissions, it extends the next retransmission timeout to 400ms; if that retransmission also receives no acknowledgement, the following timeout is extended to 800ms. As we can see, several consecutive losses produce a very large delay, which is the root reason TCP cannot be used in weak network environments.
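The compounding effect of this backoff can be sketched in a few lines of Python (an illustrative model, not WebRTC code), assuming a 200ms minimum retransmission timeout that doubles on each consecutive loss:

```python
def tcp_retransmit_delay_ms(consecutive_losses: int, min_rto_ms: int = 200) -> int:
    """Total waiting time before the packet finally gets through,
    with the timeout doubling after every consecutive loss."""
    return sum(min_rto_ms * (2 ** i) for i in range(consecutive_losses))

print(tcp_retransmit_delay_ms(1))  # 200 ms for a single loss
print(tcp_retransmit_delay_ms(3))  # 200 + 400 + 800 = 1400 ms
```

Three consecutive losses already cost 1.4 seconds of waiting, far past the 500ms real-time budget discussed earlier.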

5.4 Problems Caused by selecting UDP

Because of TCP's mechanism, we usually choose UDP to ensure real-time audio and video transmission. UDP has the advantage in latency, but its disadvantage is equally obvious: it is an unreliable, best-effort transport, so packet loss and out-of-order delivery are common. For network quality, packet loss is a serious matter that we must handle. Let's look at how WebRTC solves this problem.

6 How to Improve Network Quality

6.1 Network Quality Indicators

So, how does WebRTC handle network quality over UDP?

To solve the network quality problem, we must first know the factors that affect it: packet loss rate, delay, jitter, and out-of-order delivery. A network with a low packet loss rate, low latency, no jitter, and no reordering is a very good network; if the packet loss rate is high, network quality will be poor.

6.2 Causes of Packet Loss

The figure shows the basic topology of the network. There are many causes of packet loss. First, poor link quality: when a mobile phone connects to a base station, a weak signal causes loss. Second, saturated bandwidth: for example, a sender's upstream bit rate is high while the receiver's downstream link is small; the router buffers the data and sends it out slowly, dropping packets when the buffer overflows. Third, active packet loss: on cross-carrier routes, for example, data transmitted between different carriers may be dropped for reasons known only to the carriers. Fourth, accidental causes such as a fiber cut.

6.3 Methods to Reduce Packet Loss

WebRTC solves packet loss in two main ways: NACK and FEC.

6.4 NACK

NACK is used for packet retransmission. As the figure shows, the WebRTC sender keeps sending RTP packets to the receiver, and the receiver tracks packet loss over a short period. If packets are lost, it sends a NACK message back to the sender, recording which packets were lost during that period. On receiving the NACK, the sender finds the lost packets in its send history and retransmits them.

6.5 NACK Applicable scenarios

Of course, NACK retransmission introduces a certain delay, which includes the time spent waiting to send the NACK (10 or 20ms), the network delay of the NACK itself, the network delay of the original RTP packet, and the network delay of the retransmitted RTP packet; that is, 1.5 RTT plus 10 or 20ms. By this formula, if RTT is relatively large, say 200ms, then 1.5 RTT is 300ms. According to the real-time delay indicator described earlier, end-to-end delay should stay within 500ms; if network transmission alone takes 300ms, then once the data is also captured, encoded, decoded, and rendered, the total time will likely exceed 500ms.

Therefore, packet loss retransmission is only suitable when network transmission delay is relatively small; if RTT is large, it is not appropriate to rely on retransmission to ensure network quality.
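As a quick sanity check on the formula above, the total NACK recovery delay can be modeled as a one-liner (a hypothetical helper for illustration, not a WebRTC API):

```python
def nack_recovery_delay_ms(rtt_ms: float, nack_wait_ms: float = 20) -> float:
    """Delay to recover a lost packet via NACK: 1.5 RTT plus the
    10-20 ms the receiver waits before sending the NACK."""
    return 1.5 * rtt_ms + nack_wait_ms

print(nack_recovery_delay_ms(40))   # healthy RTT: 80.0 ms, retransmission is fine
print(nack_recovery_delay_ms(200))  # large RTT: 320.0 ms, most of the 500 ms budget
```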

6.6 FEC

FEC eliminates the effect of packet loss through redundant data; in essence it is an XOR operation. As shown in the figure, suppose the transmitted data are Data1 and Data2. If the two packets are not protected by FEC in transit and one is lost, it can only be recovered through NACK. Can we instead add some redundant data during transmission, so that when a packet is lost the receiver can recover it without retransmission? This is FEC.

XOR the last bit of Data1 with the last bit of Data2, then the second-to-last bits, and so on; the result is the redundant packet R, and all three packets are transmitted together. If Data1 is lost in transit, it can be recovered from Data2 and the redundant packet R. The recovery algorithm is also an XOR: at the receiving end, each bit of Data2 is XORed with the corresponding bit of the redundant packet to reconstruct Data1. This ensures that a lost packet can be recovered without a retransmission request.

Moreover, XOR can be chained: packets A, B, and C can be XORed together to produce D. If any one of the packets is lost, it can be recovered from D and the remaining packets.
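The XOR trick is easy to demonstrate in a few lines of Python (a toy sketch with two-byte "packets", not WebRTC's actual FEC code):

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

data1 = b"\x5a\x13"
data2 = b"\x3c\x27"
r = xor_bytes(data1, data2)      # redundant packet sent alongside the data

# Data1 is lost in transit; XOR what survived to get it back.
recovered = xor_bytes(data2, r)
assert recovered == data1
```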

6.7 ULPFEC

WebRTC uses ULPFEC by default. The principle is that the packets to be transmitted are first grouped, for example three packets per group, and a redundant packet is generated for each group. If one packet in a group is lost, it can be recovered by XORing the redundant packet with the remaining packets. In the first row of the figure, packets 1 and 2 arrive while 3 is missing, and 3 can be recovered through R1; likewise 9 can be recovered in the third row. The weakness is that the scheme fails when two packets in the same group are lost: if 4 and 5 in the second row are both lost, they cannot be recovered from 6 and R2.

6.8 FlexFEC

The solution is the improved FlexFEC, which applies redundancy in two dimensions, both horizontally and vertically.

Now, when 4 and 5 are lost at the same time, 4 can be recovered through 1, 7, and C1, and 5 through 2, 8, and C2; two consecutively lost packets can thus be retrieved. The downside, of course, is that it cannot handle long bursts of loss: if 10 packets are lost in a row, FlexFEC can do nothing.
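A small sketch (toy one-byte packets in a hypothetical 3×3 layout) shows why the two-dimensional parities recover a loss pattern that defeats ULPFEC:

```python
from functools import reduce

def parity(pkts):
    """XOR a list of equal-length packets into one parity packet."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), pkts)

# Packets 1-9 laid out as a 3x3 grid, one byte each for illustration.
grid = {i: bytes([i * 37 % 251]) for i in range(1, 10)}
col_fec = [parity([grid[i] for i in col]) for col in ([1, 4, 7], [2, 5, 8], [3, 6, 9])]

# Packets 4 and 5 lost: the row parity R2 alone cannot fix two losses,
# but each column has only one loss, so the column parities can.
rec4 = parity([grid[1], grid[7], col_fec[0]])
rec5 = parity([grid[2], grid[8], col_fec[1]])
assert rec4 == grid[4] and rec5 == grid[5]
```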

This is WebRTC's defense against packet loss: NACK plus FEC.

6.9 How to Resolve Jitter and Out-of-Order Delivery

Now let's talk about jitter and out-of-order delivery. Jitter means packets arrive in waves, many at one moment and none at another, so arrival times are very uneven. Out of order means an earlier packet arrives later, and a later packet arrives first.

WebRTC handles jitter and reordering with the JitterBuffer and NetEQ. The JitterBuffer processes video packets and NetEQ processes audio packets. They work in much the same way (NetEQ is somewhat more complicated): incoming data is buffered in a queue, and packets are pulled from the other end of the queue one by one, so the outgoing data is smooth.

As shown in the figure, each incoming RTP packet carries a Sequence Number. When data enters the queue, it is inserted at the position corresponding to its Sequence Number. In the figure, packets 104 and 107 have arrived and sit in their slots, while 103, 105, and 106 have not; their positions are left empty and filled in when they arrive. This corrects reordering, so the JitterBuffer and NetEQ resolve both out-of-order delivery and jitter.
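The slot-by-sequence-number idea can be sketched as a toy buffer (a simplification; the real JitterBuffer also handles timing, sequence wrap-around, and frame assembly):

```python
class ToyJitterBuffer:
    def __init__(self, base_seq: int, capacity: int = 64):
        self.base = base_seq                # sequence number of slot 0
        self.slots = [None] * capacity

    def insert(self, seq: int, payload: str) -> None:
        idx = seq - self.base
        if 0 <= idx < len(self.slots):
            self.slots[idx] = payload       # late packets land in their slot

    def pop(self):
        """Called on a steady timer; returns the next packet in order
        (or None if it never arrived), keeping the output smooth."""
        pkt = self.slots.pop(0)
        self.slots.append(None)
        self.base += 1
        return pkt

jb = ToyJitterBuffer(base_seq=100)
for seq in (100, 102, 101):                 # 101 arrives after 102
    jb.insert(seq, f"pkt{seq}")
print([jb.pop() for _ in range(3)])         # ['pkt100', 'pkt101', 'pkt102']
```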

In summary, NACK and FEC solve packet loss (NACK at the cost of delay, FEC at the cost of bandwidth), the JitterBuffer handles video reordering and jitter, and NetEQ handles audio reordering and jitter.

6.10 Causes of Network Delay

As for latency, it actually has a lot to do with bandwidth estimation. There are two causes of delay. First, the link itself: on a normal network, packet transmission naturally speeds up and slows down. Second, network congestion: when congestion occurs, packets are buffered, which causes delay, and when the buffer overflows, packets are dropped.

Therefore, we need to address the delay caused by congestion; the link itself is beyond our control. Let's look at how WebRTC prevents congestion.

7 Accurate bandwidth evaluation method

7.1 Accurate Bandwidth Estimation

The foundation of WebRTC's congestion prevention is accurate bandwidth estimation. It provides two methods: one based on packet loss and one based on delay. The delay-based methods are further divided into receiver-side (GOOG-REMB) and sender-side (GOOG-TCC) estimation; GOOG-TCC is the default today because it is more accurate.

7.2 Bandwidth Evaluation based on packet loss

The loss-based method is relatively simple: it is computed from the packet loss rate. If the loss rate is below 2%, this is normal loss on a healthy network, indicating that the bandwidth ceiling has not been reached, so the bandwidth estimate should be increased. For example, suppose your home bandwidth is 8Mbps. WebRTC does not know the true bandwidth at first, so it must probe: it starts from an assumed value, say 500kbps, and when it sees that the loss rate is very low, it raises the estimate, from 500kbps to 1Mbps, then 1.5Mbps, 2Mbps, and so on, increasing by 8% each time. If the loss rate exceeds 10%, congestion has occurred, and the bandwidth estimate is reduced immediately, as in the formula (loss > 0.1). If the loss rate is between 2% and 10%, the current estimate is considered accurate and is held steady, neither increased nor decreased.
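The increase/hold/decrease rules can be sketched as follows. This is a simplified model of the loss-based controller; the multiplicative cut `1 - 0.5 * loss` follows the Google Congestion Control draft, and the exact constants in WebRTC may differ:

```python
def update_loss_based_bwe(bwe_bps: float, loss: float) -> float:
    if loss < 0.02:                      # negligible loss: probe upward by 8%
        return bwe_bps * 1.08
    if loss > 0.10:                      # congestion: multiplicative back-off
        return bwe_bps * (1 - 0.5 * loss)
    return bwe_bps                       # 2-10% loss: estimate is about right

bwe = 500_000.0                          # start from an assumed 500 kbps
bwe = update_loss_based_bwe(bwe, loss=0.01)   # low loss -> grows toward the true capacity
```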

7.3 Bandwidth Evaluation based on delay

Delay-based bandwidth estimation is better than loss-based estimation because it can predict congestion before it happens. By the time the loss rate exceeds 10%, congestion has likely already occurred; once the network is congested it takes time to recover, and that recovery time hurts audio and video service quality.

Delay-based estimation avoids this. Its basic principle: if the transmission delay of received packets keeps increasing, the network is deteriorating, and at a certain point the bandwidth estimate should be lowered to prevent congestion. The calculation is driven by a state machine (which is complicated; I won't go into it here). When the state is good, the bandwidth is increased by 8% each time, the same as in the loss-based method; when the delay keeps accumulating, the estimate is reduced to 85% of its current value; otherwise, the current bandwidth estimate is kept unchanged.
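The corresponding rate update, driven by the detector's state, can be sketched like this (a simplified model of the state machine's output; in real WebRTC the increase is also capped and smoothed):

```python
def update_delay_based_bwe(bwe_bps: float, state: str) -> float:
    if state == "normal":        # queuing delay stable: probe up 8%
        return bwe_bps * 1.08
    if state == "overuse":       # delay keeps accumulating: back off to 85%
        return bwe_bps * 0.85
    return bwe_bps               # e.g. "underuse": hold while queues drain

bwe = 1_000_000.0
bwe = update_delay_based_bwe(bwe, "overuse")   # cut to 850 kbps before loss appears
```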

8 Balance media data and bandwidth

8.1 Balance between Media data and Bandwidth

When the bandwidth estimate is accurate, control becomes easy. Next, let's look at how WebRTC balances media data against bandwidth.

Bandwidth estimation and network quality improvements were described earlier. Providing better audio and video quality under limited bandwidth is the goal everyone has been diligently pursuing, so compressing the same data to a smaller size has always been a key means of improving service quality. The most commonly used video encoder is still H.264, but newer encoders have made big strides: VP9/H.265 and AV1/H.266 provide higher compression rates, allowing us to transmit more content over a constrained network and guarantee better service quality.

On the other hand, when the bandwidth is fixed and the bit stream cannot be compressed further, dynamic bit rate can be used. With dynamic bit rates you can often see the effect directly in the product: the video alternates between clear and blurry. When bandwidth is low, the encoder compresses the stream harder and the video blurs; when bandwidth is high, the encoder raises the bit rate and the video becomes clear. Both approaches protect real-time quality by reducing the amount of data.

8.2 Simulcast and SVC

In addition, quality issues can be addressed through Simulcast or SVC. The two solve the problem with the same idea: the sender transmits more stream data to the server, and the server selects the appropriate stream for each receiver according to its bandwidth. Users with poor networks receive a low-definition stream; users with good networks receive a high-definition stream. Both technologies therefore place very high demands on the sender's bandwidth and quality.

The biggest difference between SVC and Simulcast is that SVC uploads a single stream composed of multiple layers, and the server selects different layers according to each receiver's bandwidth. As shown in the figure, a mobile device with little bandwidth receives only the small base layer, while a PC with ample bandwidth receives all layers. Simulcast uploads multiple independent streams, generally small, medium, and large: the small one goes to the phone and the largest to the PC. Simulcast's advantage is that each stream is independent, so hardware codecs can be used for each one, whereas SVC's layered approach currently has no hardware support and cannot be hardware-accelerated.

8.3 Flow Control

Even with an accurate bandwidth estimate, if the outgoing bit stream is still larger than the bandwidth, flow control is needed. Its function is to reduce the sending bit rate when the output exceeds the bandwidth, preventing congestion, though at the cost of some added delay. Flow control actually has to manage two points. The first is the Pacer, which reduces the sending rate. But reducing the sending rate alone is not enough: if the encoder keeps pushing a high bit rate into the Pacer, its buffer will eventually burst. Therefore, while the Pacer throttles the sending rate, the output bit rate of the audio and video encoders must also be reduced, keeping the two in balance so the data rate comes down gently.
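The Pacer's core idea, draining a send queue at the target rate instead of bursting whatever the encoder produced, fits in a few lines (a toy token-budget sketch, far simpler than WebRTC's real pacing):

```python
import collections

class ToyPacer:
    def __init__(self, rate_bps: float):
        self.rate_bps = rate_bps
        self.queue = collections.deque()
        self.budget_bits = 0.0

    def enqueue(self, packet: bytes) -> None:
        self.queue.append(packet)

    def tick(self, interval_s: float) -> list:
        """Called periodically; release only as many packets as the
        accumulated bit budget allows."""
        self.budget_bits += self.rate_bps * interval_s
        sent = []
        while self.queue and len(self.queue[0]) * 8 <= self.budget_bits:
            pkt = self.queue.popleft()
            self.budget_bits -= len(pkt) * 8
            sent.append(pkt)
        return sent

pacer = ToyPacer(rate_bps=8000)
pacer.enqueue(b"x" * 100)                 # 800 bits each
pacer.enqueue(b"y" * 100)
print(len(pacer.tick(0.1)))               # 1: only 800 bits of budget this tick
print(len(pacer.tick(0.1)))               # 1: the second packet goes out next tick
```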

As mentioned before, although flow control prevents network congestion, it adds some delay, which ultimately shows up in the overall real-time indicators, and the total delay must stay within 500ms. For example, if the end-to-end delay used to be 200ms, insufficient bandwidth may push it to 300ms or 400ms, but it must never exceed 500ms.

In addition, if throttling the encoder's bit rate alone cannot match the bandwidth, the bit rate can also be lowered by reducing the resolution. In short, when bandwidth is insufficient, reduce the amount of data as much as possible; if that is still not enough, turn off the video and keep only the audio to ensure the session stays smooth.

9 Summary

To sum up, for service quality assurance, the first step is to improve network quality: NACK and FEC solve packet loss, the JitterBuffer handles video reordering and jitter, and NetEQ handles audio reordering and jitter. Bandwidth estimation is done through GOOG-REMB and GOOG-TCC, as well as loss-based estimation. To ensure real-time performance, better routes should be chosen, such as better relay nodes to guarantee cloud network bandwidth when a client talks to a server. On the business side, data volume can be reduced with AV1, SVC, Simulcast, and dynamic bit rates, or by scaling back the service itself. Against congestion, the Pacer provides flow control; as long as the total delay stays within 500ms, a modest increase in delay is acceptable.

That’s all for this share. Thank you!

Q&A (part)

1. Is the path selection automatically selected by WebRTC?

It is automatic. WebRTC determines whether the two parties are on the same LAN; if so, it connects them directly within the LAN. If not, they obtain their public addresses through the STUN protocol and attempt NAT traversal. If traversal fails, the TURN service is used to relay the data.

2. What are the indicators of WebRTC network transmission quality?

When measuring any real-time transmission system, the first thing to check is whether its delay is within 500ms. In fact, 500ms is a rather harsh standard for real-time communication, because networks vary so much that hitting this target is quite difficult. The second is the packet loss rate, a very key indicator: as just mentioned, a loss rate under 2% means the network is good; under 10% means WebRTC's current bandwidth estimate is accurate; over 10% indicates congestion. Some vendors claim their products can resist xx% packet loss, on the premise that packet loss is not treated as an indicator. But in real networks, once router buffers are full, large-scale loss appears; without treating packet loss as an indicator, you lose a way of judging congestion, which is clearly unreasonable.

3. How does the video JitterBuffer achieve smoothing?

JitterBuffer smoothing is not as difficult as we might expect; the complexity comes from additional factors such as audio and video synchronization. For smoothing alone, we can do it ourselves with a buffer, which can be dynamic or of fixed size. To simplify, assume a fixed size: define an array that holds 100 elements and take one packet from one end of the array every 10 milliseconds. That is the simplest smoothing. A better approach is to change the size of the smoothing array dynamically as the network changes, which is more advanced, though calculating the dynamic size is somewhat harder when the buffer is variable.

4. What’s a good solution for WebRTC to communicate with SIP clients?

Generally, it is best to communicate with SIP through a streaming media server such as Janus, which supports both the SIP protocol and WebRTC clients. The SIP endpoint sends data to the streaming media server, which forwards it to the WebRTC endpoint; likewise, the WebRTC endpoint reaches the SIP endpoint through the server.

5. Are both FEC and NACK enabled by default?

Yes. In WebRTC, FEC and NACK are both enabled by default, and both can be switched on and off.

6. Can you tell us why TCC is more accurate than REMB?

There are two main differences between TCC and REMB. First, the calculation happens at different ends. REMB computes at the receiver, which then returns the result to the sender for control; by the time the result arrives, the network may have changed again, so REMB's timeliness is insufficient. TCC gives all the data to the sender to compute and control, so it is more timely and accurate. Second, the filters differ: REMB uses a Kalman filter, while TCC uses a least-squares filter (trendline), which is better at estimating network delay.

7. What can P2P do to minimize the delay in an Intranet environment? Can the minimum delay of laboratory environment be less than 100ms?

Within the same LAN, the actual delay is only tens of milliseconds. Some of you may wonder: some products have very low latency on the same LAN, so why does the delay increase with WebRTC? This is because WebRTC internally applies various mechanisms and buffers to guarantee network quality, so it necessarily introduces some delay; it trades delay for quality. On a LAN, the network has essentially no delay, no packet loss, no jitter, and no reordering; with no policies in play, transmission is fastest. So for pure intranet communication, WebRTC's real-time performance is not as good as that of a product with no such policies.

8. What is the difference between ULPFEC and FLEXFEC?

ULPFEC supports only one-dimensional redundancy, while FlexFEC supports two-dimensional redundancy, both horizontal and vertical. FlexFEC therefore resists packet loss better than ULPFEC, but also consumes more bandwidth.

9. Reliability: does WebRTC on UDP encapsulate SEQ? Then, if ack retransmission is required, how is it different from TCP SACK?

WebRTC uses the RTP protocol to transmit data, and RTP contains the SEQ field. Moreover, WebRTC's NACK mechanism differs from TCP's: every piece of TCP data must be confirmed by an ACK, and if an ACK is not received, TCP retransmits until one arrives or the connection is dropped. NACK allows packets to be lost: if retransmission fails several times, the packet is simply abandoned, and even a successfully retransmitted packet is discarded if the receiver finds it has expired.

10. Will WebRTC use QUIC protocol?

This question is much debated. WebRTC has been experimenting with QUIC. From my point of view, QUIC exists primarily to support HTTP/3, and HTTP/3 exists to solve TCP's problems; since it must guarantee data reliability, real-time performance is affected. If QUIC can solve the real-time problem, it can be used; otherwise it cannot.

From my perspective, it’s better to have one protocol that solves one thing. It’s hard to have one protocol that solves everything.


Recommended reading

WebRTC Audio and Video Real-Time Interaction Technology: Principles, Practice, and Source Code Analysis, by Li Chao

The book explains WebRTC systematically, covering both principles and practice: from how WebRTC achieves real-time audio and video communication, to how to use the WebRTC library to build audio and video communication, and on to analysis of the WebRTC source code, step by step. In addition, the book focuses on analyzing WebRTC's transmission system, and I believe readers can glimpse the secrets of WebRTC transmission through it.
