Background:
I am in charge of a project with high real-time requirements. Low delay is a key technical indicator, so we want to reduce delay as much as possible to improve the service experience.
That meant doing some research on UDP.
Our data protocol is built on reliable network transport: there are dependencies between packets, and packet loss is not acceptable.
Migrating from TCP to Reliable UDP may therefore be a promising direction.
P.S.: This article is a compilation: organizing, partially translating, and adding my own understanding to experimental data gathered from around the web. It is still at the research stage and I did not compare each RUDP scheme in person, but the data and theory come from industry leaders, so I hope it serves as a useful reference. Feedback and corrections are very welcome!
A brief introduction to UDP:
TCP is a connection-oriented, reliable, ordered transport-layer protocol, while UDP is a datagram-oriented, unreliable, unordered transport-layer protocol that does not establish connections.
What factors determine the size of each UDP datagram sent?
- The UDP length field is 16 bits, so a datagram cannot exceed 2^16 - 1 = 65535 bytes;
- The Ethernet frame length, i.e. the data link layer MTU (maximum transmission unit);
- The size of the socket's UDP send buffer.
The maximum length of a UDP packet is 2^16 - 1 = 65535 bytes. The UDP header is 8 bytes and the IP header is 20 bytes, so the maximum payload is 65535 - 20 - 8 = 65507 bytes. Because UDP is unreliable, we should try to avoid IP fragmentation during transmission, which is where the MTU (maximum transmission unit) matters. On the Internet the standard guaranteed MTU value is 576 bytes, so for UDP over the Internet the maximum safe payload per datagram is 576 - 20 - 8 = 548 bytes.
So what is a safe maximum packet size for UDP? It is recommended to keep UDP payloads under 1472 bytes (1500 - 20 - 8) in a LAN environment and under 548 bytes when programming for the Internet.
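To make the arithmetic above concrete, here is a minimal C++ sketch. The MTU values are the conventional 1500-byte Ethernet MTU and 576-byte Internet minimum discussed above; no socket code is involved.

```cpp
#include <cstdio>

// IPv4 header (without options) and UDP header sizes are fixed.
constexpr int kIpHeader  = 20;
constexpr int kUdpHeader = 8;

// Largest payload that fits in one link-layer frame without IP fragmentation.
constexpr int MaxUdpPayload(int mtu) { return mtu - kIpHeader - kUdpHeader; }

int main() {
    std::printf("Absolute UDP limit : %d bytes\n", 65535 - kIpHeader - kUdpHeader); // 65507
    std::printf("LAN (MTU 1500)     : %d bytes\n", MaxUdpPayload(1500));            // 1472
    std::printf("Internet (MTU 576) : %d bytes\n", MaxUdpPayload(576));             // 548
    return 0;
}
```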
What can cause UDP packet loss?
- Datagram fragmentation/reassembly loss: a UDP datagram can be up to 64 KB, but the data link layer MTU is only about 1500 bytes (1.5 KB), so a large UDP packet is fragmented at the IP layer and reassembled at the receiver. If any fragment is lost during this process, the whole datagram is lost; UDP's checksum mechanism also causes corrupted datagrams to be discarded.
- UDP buffer fills up: when the receive buffer is full and the receiver has not yet processed the queued datagrams, newly arriving datagrams have nowhere to go and are discarded.
The client sends UDP data twice: first 500 bytes, then 300 bytes. The server receives in blocking mode and calls recvfrom(1000) once. Does it receive 1000, 500, 300, or something else? Because UDP preserves message boundaries, a single recvfrom can only return 500 or 300; and because UDP is unordered and unreliable, it may return either one, or block on the recvfrom call until it times out (i.e. receive nothing).
Assuming no packets are lost and they arrive in sending order, the server receives in blocking mode and makes three calls: recvfrom(200), recvfrom(1000), recvfrom(1000). What is received? Because UDP preserves message boundaries, the first recvfrom(200) takes the first 500-byte datagram, but since the user-space buffer is only 200 bytes, only the first 200 bytes are returned and the remaining 300 bytes are discarded. The second recvfrom(1000) returns the 300-byte datagram, and the third recvfrom(1000) blocks (see the sketch after this list).
- UDP socket buffer too small: if the UDP packet sent by the client is larger than the socket buffer can hold, the packet is dropped.
- ARP cache expiry: the ARP cache lasts about 10 minutes. When the ARP cache does not contain the peer's MAC address, or the entry has expired, an ARP request is sent to obtain it. Until the MAC address is resolved, outgoing UDP packets are queued in arp_queue, which by default holds at most 3 packets; any additional UDP packets are discarded.
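The recvfrom message-boundary behaviour described above can be reproduced with a small POSIX sketch (a sketch only; the port 9000 and the buffer sizes are arbitrary, and error handling is omitted):

```cpp
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(9000);               // arbitrary test port
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));

    char small[200];                            // deliberately smaller than the 500-byte datagram
    // One recvfrom returns at most one datagram: with a 200-byte buffer the
    // first 500-byte datagram is truncated to 200 bytes and the rest is discarded.
    ssize_t n = recvfrom(fd, small, sizeof(small), 0, nullptr, nullptr);
    std::printf("first  recvfrom returned %zd bytes\n", n);   // 200, not 500

    char big[1000];
    // The next call returns the next whole datagram (300 bytes), never a
    // concatenation of the two sends: UDP preserves message boundaries.
    n = recvfrom(fd, big, sizeof(big), 0, nullptr, nullptr);
    std::printf("second recvfrom returned %zd bytes\n", n);   // 300

    close(fd);
    return 0;
}
```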
After so many years of development, TCP has accumulated plenty of algorithms and optimizations, and under good network conditions its overall performance is better than UDP's. So when do we have to use UDP?
- High real-time requirements: for example real-time conferencing or real-time video. With TCP, retransmissions under poor network conditions will delay the picture or even cause it to pile up. With UDP, occasionally losing a few packets has little effect, so UDP is the better choice here.
- Multipoint communication: TCP needs to maintain long-lived connections, so for multipoint communication a node must establish two-way connections with multiple peers, and in a NAT environment establishing a direct TCP connection between two nodes is not always easy. UDP is connectionless, can send directly, has low overhead and good NAT penetration, so UDP is the right choice in this case.
- NAT traversal is required.
Here are some Reliable UDP-related or UDP-like transport protocols.
1. QUIC
QUIC (Quick UDP Internet Connections) is a protocol proposed by Google that performs multiplexed, concurrent transmission over UDP.
Advantages:
- Low connection-establishment delay: a full HTTPS handshake requires 3 RTTs, and even session resumption needs at least 2 RTTs. QUIC, built on UDP with a 0-RTT secure handshake, can in most cases send data after 0 RTT, and its 0-RTT success rate is much higher than that of TLS session tickets.
- Improved congestion control: QUIC currently uses TCP's Cubic congestion control algorithm by default and also supports CubicBytes, Reno, RenoBytes, BBR, PCC and other algorithms. Applications can switch congestion control without downtime or upgrades.
- Multiplexing without head-of-line blocking: QUIC's multiplexing is similar to HTTP/2, in that multiple HTTP requests (streams) can be sent concurrently on one QUIC connection. But QUIC has one big advantage over HTTP/2: there are no dependencies between the streams of a connection. If stream 2 loses a UDP packet, only stream 2 is affected; the streams before and after it are not.
- Connection migration: how does QUIC do it? Simply put, a QUIC connection is no longer identified by the (IP, port) 4-tuple, but by a 64-bit random number used as a connection ID. Even if the IP or port changes, as long as the ID stays the same the connection is maintained; the upper-layer business logic does not perceive the change, is not interrupted, and does not need to reconnect (see the sketch below).
Since this ID is randomly generated by the client and is 64 bits long, the probability of collision is very low.
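The difference from TCP's 4-tuple addressing can be illustrated with a small conceptual sketch (this is not QUIC code, just the lookup idea; the type and field names are mine): connections are stored under a random 64-bit ID, so a change of source IP/port does not invalidate the lookup key.

```cpp
#include <cstdint>
#include <string>
#include <unordered_map>

// Hypothetical per-connection state, keyed by connection ID instead of the
// (src IP, src port, dst IP, dst port) 4-tuple that TCP uses.
struct QuicLikeConnection {
    std::string peer_address;   // may change when the client roams (Wi-Fi -> 4G)
    // ... congestion state, streams, crypto keys ...
};

int main() {
    std::unordered_map<uint64_t, QuicLikeConnection> connections;

    uint64_t conn_id = 0x1234ABCD5678EF01ULL;            // chosen randomly by the client
    connections[conn_id] = {"203.0.113.7:50000"};

    // The client switches networks: the same ID now arrives from a new address.
    connections[conn_id].peer_address = "198.51.100.9:61234";
    // The connection object (and all upper-layer state) survives the migration.
    return 0;
}
```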
Problems:
Although QUIC has been promoted for many years, it has not yet reached mass adoption. The IETF QUIC protocol is still in draft, and there are two not-yet-stable variants, Google QUIC and IETF QUIC.
- 1) Routers and operators block UDP port 443 (where QUIC is deployed);
- 2) Due to QoS restrictions, bursts of UDP packets are mistaken for attacks by service providers and dropped;
- 3) Neither routers nor firewalls are currently ready for QUIC.
Chrome has supported the QUIC protocol experimentally since 2014. You can check whether QUIC is enabled by opening chrome://flags and searching for "quic".
Currently, only Caddy (from version 0.9 onward) supports QUIC for web services; other common web servers such as Nginx and Apache do not yet support it.
Open source implementations:
- Chromium: the official implementation. It has many natural advantages; it is maintained by Google, is basically free of pitfalls, and can follow Chrome updates to the latest version at any time. However, building Chromium is tricky and it has its own separate build toolchain. Not recommended for now.
- proto-quic: the QUIC protocol part stripped out of Chromium, but its GitHub page announces that it is no longer supported and is for experimental use only. Not recommended.
- goquic: a Go wrapper for libquic. libquic was also extracted from Chromium, has not been maintained for several years, and only supports QUIC-36. goquic provides a reverse proxy, which the latest Chrome no longer works with. Not recommended.
- quic-go: a QUIC protocol stack written entirely in Go. Development is very active, it is already used in Caddy, and it is MIT-licensed. Currently it is the better option.
There is no open source implementation for mobile, but you can extract the dynamic library from Chromium's mobile build and use it; Tencent has a demo: github.com/52im/clb-qu… . See "Making the Internet faster: the new-generation QUIC protocol in Tencent's technology practice".
2. WebRTC and DataChannel
WebRTC, short for Web Real-Time Communication, is a technology that enables web browsers to make real-time voice calls or video chats. Google acquired the underlying technology in 2010 when it bought Global IP Solutions for $68.2 million.
WebRTC provides the core technology of real-time audio and video, including capture, encoding and decoding, network transmission and rendering, and supports multiple platforms: Windows, Linux, macOS, Android. Although WebRTC's goal is cross-platform real-time audio/video communication on the Web, because its core-layer code is native, high quality and cohesive, developers can easily port and apply it outside the Web platform. For a long time WebRTC was the only free, high-quality real-time audio/video communication technology available in the industry.
Advantages:
- Chrome browser support; cross-platform; free;
- Excellent algorithms and techniques: audio/video algorithms, plus the GCC/BBR congestion control algorithms;
- Strong NAT hole-punching capability.
Problems:
- Lack of server-side solution design and deployment; you have to rely on third-party solutions such as Kurento, Janus, Licode.
- Transmission quality is hard to guarantee. Being P2P, quality is difficult to ensure and optimization options are limited to end-to-end tweaks; it struggles with the complex domestic Internet environment, e.g. cross-region, cross-carrier, low-bandwidth and high-packet-loss scenarios.
- Weak support for native development. The Android demo has not been updated since 2016 even though the interfaces have changed, and the documentation is not comprehensive. Because of the amount of domain knowledge involved (audio/video capture, processing, codecs, real-time transport, etc.), the overall framework is complex and the API granularity is fine, so even compiling the project is not easy. Early on, the lack of H.264 codec support meant mobile could only use VP8 software encoding (Google's own codec standard) for a long time, leading to poor performance on mid-range phones; combined with Android's own fragmentation, without per-device adaptation it is hard to deliver a consistent user experience.
Is WebRTC reliable?
In the big picture, WebRTC is sound:
- Enabling and speeding up cross-platform, cross-browser communication is a major reason for the popularity of Google's WebRTC.
- The big players are all on board, from chip vendors at the bottom to application vendors at the top: Intel, ARM, Microsoft, Apple, Polycom, Vidyo, etc.
- Domestic vendors providing audio/video PaaS and SaaS services all build on WebRTC, including QQ, WeChat, and teams from Tencent and YY.
- Google has built successful apps like Hangouts and Duo on WebRTC;
- WebRTC fits live-streaming scenarios well: push the stream with WebRTC and watch via the existing CDN scheme; Huajiao Live and Qianfan Live are good examples of this combination;
- There are also many cases of extracting individual modules for separate use.
But for small teams, it’s difficult:
- The WebRTC codebase is huge and the build environment is complex, so the entry barrier is relatively high;
- WebRTC is only a client; it is best suited to 1-to-1 calls and not to many-to-many scenarios such as video conferencing. To support multiple parties you need additional servers, which is itself a non-trivial technology;
- Although WebRTC provides many signal-processing, echo-cancellation and codec libraries, reaching the level of QQ or WeChat with it still takes a lot of optimization effort; otherwise you will only get to 60-70% of WeChat's quality;
- Server deployment: WeChat works so well because servers are deployed in many regions, which minimizes delay and guarantees call quality; with a purely P2P approach it is hard to reach the same level.
Third-party real-time audio/video cloud service SDKs:
- 1. Tencent SDK;
- 2. Sonic Network (Agora), from the YY team;
- 3. From the QQ team;
- 4. Participation, from the WebEx/Cisco team;
- 5. Zoom, from the Cisco team;
- 6. Vidyo, DingTalk's solution supplier;
- 7. Century Dingdian, Inke's solution supplier;
- 8. One; no SDK provided yet, but the technology is really strong;
- 9. anyRTC.
Is it feasible to develop real-time audio and video based on open source WebRTC? What third-party SDKs are there?
See "Detailed explanation and practice summary of the architecture of the open-source real-time audio/video project WebRTC" and "WebRTC: it looks beautiful, but how many holes must be filled before it can be used in production?".
In conclusion, WebRTC is trying to make audio, video and streaming-media technology affordable, but using it in a complex network environment still requires a lot of expertise and development cost.
Back to Reliable UDP. Within WebRTC's technology stack:
The DataChannel can be used as a Reliable UDP channel to send data. It is based on the SCTP protocol, and QUIC is being tested as a possible replacement for SCTP.
WebRTC uses a modified, trimmed-down version of SCTP. SCTP is itself a transport-layer protocol at the same level as UDP, but in the WebRTC data channel SCTP is carried over DTLS (which in turn runs over UDP), so SCTP's multi-homing capability is not used.
Two drafts describe WebRTC's adaptations: draft-ietf-rtcweb-data-channel-13 and draft-ietf-rtcweb-data-protocol-09. According to them, WebRTC provides reliable, partially reliable and unreliable transmission modes based on the actual scenarios.
WebRTC's audio and video channels, by contrast, are based on SRTP, and the GCC/BBR congestion control algorithms are implemented on top of SRTP; the DataChannel cannot use these congestion control algorithms.
Introducing the whole WebRTC stack just to use Reliable UDP is rather costly.
3. SCTP
Stream Control Transmission Protocol (SCTP) is a transport-layer protocol defined by the Internet Engineering Task Force (IETF) in 2000. It has features of both TCP and UDP.
- TCP transmits a byte stream, while SCTP transmits data chunks;
- TCP is usually single-path, while SCTP supports multi-path transmission;
- TCP is a single ordered stream, while SCTP supports multiple independently ordered/unordered streams;
- TCP connection establishment takes a three-way handshake, while SCTP takes a four-way handshake;
- SCTP has a heartbeat mechanism to manage path availability.
SCTP was incorporated into the Linux kernel in version 2.6. (But some routers do not support this transport-layer protocol.)
SCTP originally brought the reliable transmission mechanisms of narrowband SS7 signaling to IP and removed TCP's limitation of not providing message framing. Because it appeared late, was originally designed for signaling transport, and mainly targets multi-stream applications, it is widely used in the telecom field but rarely on end-user terminals; even Windows does not support it, so it has not become widespread.
Given WebRTC's experience, using the SCTP exposed by the standard socket interface directly is not advisable; SCTP over UDP is the wiser choice. But there are few open source projects and the ecosystem is poor.
Next up are Reliable UDP's open source solutions.
4. KCP
GitHub: github.com/skywind3000… 7.7k stars
TCP is designed for throughput (how many kilobytes of data can be transmitted per second) and makes full use of bandwidth. KCP is designed for flow speed (how long a single packet takes to get from one end to the other) and reduces average latency by 30-40% compared with TCP, at the cost of wasting 10-20% of bandwidth. The TCP channel is a wide canal with slow flow but high volume per second, while KCP is a small torrent with fast-moving water.
To guarantee flexibility and provide only the most basic Reliable UDP capability, KCP is implemented as a pure algorithm: it is not responsible for sending or receiving the underlying packets (such as UDP). The user defines how underlying packets are sent and provides this to KCP as a callback; even the clock must be passed in from outside, and there is not a single system call inside. It supports multiplexing.
KCP essentially makes TCP-style flow control configurable; its greatest significance is moving the configuration of these algorithms up to the application layer, giving external extensibility and customizability.
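A minimal sketch of this callback design, using the ikcp.h C API from the KCP repository (the udp_output callback and the clock variable are placeholders for whatever transport and timer the application supplies):

```cpp
#include "ikcp.h"      // from github.com/skywind3000/kcp
#include <cstdio>

// Placeholder for the application's own UDP sender: KCP never touches sockets itself.
static int udp_output(const char* buf, int len, ikcpcb* /*kcp*/, void* user) {
    // e.g. sendto(*(int*)user, buf, len, 0, ...);
    (void)buf; (void)len; (void)user;
    return 0;
}

int main() {
    int udp_fd = -1;                                   // the real UDP socket would live here
    ikcpcb* kcp = ikcp_create(0x11223344, &udp_fd);    // conversation id must match on both ends
    ikcp_setoutput(kcp, udp_output);                   // KCP emits packets via this callback
    ikcp_nodelay(kcp, 1, 10, 2, 1);                    // "fast mode" tuning from the KCP docs

    const char msg[] = "hello kcp";
    ikcp_send(kcp, msg, sizeof(msg));                  // queue application data

    IUINT32 now_ms = 0;                                // the clock is also supplied from outside
    for (int tick = 0; tick < 100; ++tick, now_ms += 10) {
        // Feed any UDP datagrams received from the peer into KCP:
        //   ikcp_input(kcp, recv_buf, recv_len);
        ikcp_update(kcp, now_ms);                      // drives retransmission / flow control

        char out[1500];
        int n = ikcp_recv(kcp, out, sizeof(out));      // reassembled, ordered application data
        if (n > 0) { std::printf("got %d bytes\n", n); }
    }
    ikcp_release(kcp);
    return 0;
}
```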
There are many extension libraries and applications based on KCP.
Case study:
- Tomorrow Never Dies: K17's game Tomorrow Never Dies (Google Play) uses KCP to speed up game messaging and connect players around the world;
- Immortal Battle: a 4399 MOBA game, uses KCP to optimize game synchronization;
- CC: NetEase CC uses KCP to accelerate video streaming and effectively improve fluency;
- BOBO: NetEase BOBO uses KCP to accelerate anchors' push streams;
- Yunfan Acceleration: uses KCP to accelerate file transfer and video streaming, improving the smoothness of Taiwan anchors' push streams;
- SpatialOS: a massively multiplayer distributed game server engine, successor to BigWorld, uses KCP to speed up data transfer;
- Lantern: a well-known circumvention proxy with 50,000 stars on GitHub, accelerated with kcp-go.
5. UDT
Website: sourceforge.net/projects/ud…
UDP-based Data Transfer Protocol (UDT) was proposed to address TCP's poor performance when transferring large amounts of data over long distances. Built on UDP, UDT adds its own congestion control and reliability, and supports reliable stream transmission (similar to TCP) as well as partially reliable datagram transmission (enhanced UDP).
UDT's native congestion control algorithms are designed for bulk data transfer over high bandwidth-delay-product networks, so in everyday use there are application scenarios where UDT performs poorly, especially in wireless network environments.
6. ENet
GitHub: lsalzman/enet 1.5k stars
Website: enet.bespin.org/Features.ht…
- Provides optional reliable packet delivery;
- Performance sits between TCP and UDP.
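ENet's "optional reliability" shows up directly in the packet flags. Below is a minimal client-side sketch using ENet's C API (the host name and port are placeholders and error handling is trimmed):

```cpp
#include <enet/enet.h>

int main() {
    enet_initialize();

    // A client host with 1 outgoing connection and 2 channels.
    ENetHost* client = enet_host_create(nullptr, 1, 2, 0, 0);

    ENetAddress address;
    enet_address_set_host(&address, "example.com");    // placeholder server
    address.port = 12345;                               // placeholder port
    ENetPeer* peer = enet_host_connect(client, &address, 2, 0);

    ENetEvent event;
    if (enet_host_service(client, &event, 5000) > 0 &&
        event.type == ENET_EVENT_TYPE_CONNECT) {
        const char important[] = "state update";
        const char cosmetic[]  = "particle effect";

        // Channel 0, delivered reliably (retransmitted until acknowledged).
        enet_peer_send(peer, 0,
            enet_packet_create(important, sizeof(important), ENET_PACKET_FLAG_RELIABLE));

        // Channel 1, fire-and-forget (flag 0 = unreliable, like plain UDP).
        enet_peer_send(peer, 1,
            enet_packet_create(cosmetic, sizeof(cosmetic), 0));

        enet_host_flush(client);                        // push queued packets onto the wire
    }

    enet_host_destroy(client);
    enet_deinitialize();
    return 0;
}
```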
7. RakNet
GitHub: facebookarchive/RakNet 2.8k stars
Website: www.jenkinssoftware.com/features.ht…
RakNet was originally designed as a networking library for multiplayer games and was later polished and commercialized. In 2014 it was open-sourced under the BSD license (you can use it freely and modify the source). In addition to reliable, multi-channel transmission, RakNet also includes features commonly needed by games, such as HTTP transfer, voice transmission, NAT traversal, email sending and message encryption.
Advantages:
- Cross-platform;
- RakNet uses SHA-1, AES-128, SYN cookies and RSA to protect transmissions from attack;
- Voice transmission uses the Speex codec; 8-bit audio takes only 500 bytes per second to transmit;
- A remote terminal: with RakNet you can manage your programs remotely, including program settings, password management and log management;
- Directory servers: allow servers to advertise themselves so that clients can find and connect to them.
RakNet adds the concept of transport channels (multiplexing over a single connection) to improve data transmission efficiency. A transport channel exists only on the sending side; the receiver does not see the channel concept.
Based on each packet's requirements and characteristics, RakNet's transmission modes and channels can be used to refine the sending rules and squeeze out performance. Taking RPG gameplay as an example, here is a list of packet characteristics and an attempt to derive their transmission modes and channels (a code sketch follows the table):
| Packet | Requirement | PacketReliability | OrderingChannel |
| --- | --- | --- | --- |
| Hero position | Only the latest position matters | RELIABLE_SEQUENCED | 1 |
| Hero skills | Skill combos need a strict ordering relation | RELIABLE_ORDERED | 1 |
| Hero health | Lost packets would affect the match outcome; the health UI only shows the latest value without noticeable transitions | RELIABLE_SEQUENCED | 2 |
| Text chat | Dialogue needs strict order; missing content can make the topic ambiguous | RELIABLE_ORDERED | 2 |
| Quick chat | Out-of-order party info does not affect combat stats | RELIABLE | None |
Refer to the RakNet study notes.
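Mapping the table above onto RakNet's API looks roughly like this (a sketch only: the payloads, server address and channel numbers are illustrative, and in a real game the first payload byte would normally be a message ID):

```cpp
#include "RakPeerInterface.h"
#include "PacketPriority.h"       // PacketPriority / PacketReliability enums

int main() {
    RakNet::RakPeerInterface* peer = RakNet::RakPeerInterface::GetInstance();
    RakNet::SocketDescriptor sd;                       // client socket on any local port
    peer->Startup(1, &sd, 1);
    peer->Connect("example.com", 60000, nullptr, 0);   // placeholder server address/port

    // ... once the connection has been accepted: ...
    const char pos[]   = "hero position";
    const char skill[] = "hero skill";
    const char chat[]  = "quick chat";

    // Only the latest position matters -> sequenced delivery, ordering channel 1.
    peer->Send(pos, sizeof(pos), HIGH_PRIORITY, RELIABLE_SEQUENCED, 1,
               RakNet::UNASSIGNED_SYSTEM_ADDRESS, true);

    // Skill combos need strict ordering -> ordered delivery on the same channel 1.
    peer->Send(skill, sizeof(skill), HIGH_PRIORITY, RELIABLE_ORDERED, 1,
               RakNet::UNASSIGNED_SYSTEM_ADDRESS, true);

    // Must arrive, but order does not matter -> plain RELIABLE, channel unused (0).
    peer->Send(chat, sizeof(chat), MEDIUM_PRIORITY, RELIABLE, 0,
               RakNet::UNASSIGNED_SYSTEM_ADDRESS, true);

    RakNet::RakPeerInterface::DestroyInstance(peer);
    return 0;
}
```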
Disadvantages:
- RakNet can theoretically sustain up to about 40,000 ping-pong messages per second between multiple clients and a server, but it is not stable: if messages pile up for some reason, send/receive response times can degrade severely, to the order of seconds;
- The last update was in 2015;
- It contains many business-layer features and, given the lack of maintenance, is basically obsolete.
8. Aeron
GitHub: real-logic/Aeron 4.9k stars
Website: real-logic.co.uk
Aeron is designed for UDP unicast, UDP multicast and high-volume IPC messaging. Java, C++ and .NET clients are provided, and data can be transferred efficiently between different machines or within the same machine via IPC. In addition, message streams can be persisted by the Archive module for later (or live) replay.
The Aeron protocol is designed to be run directly over many different types of transmission media, including shared memory/IPC, InfiniBand/RDMA, UDP, TCP, Raw IP, HTTP, WebSocket, BLE, etc.
Performance is a key focus for Aeron, which aims for the highest throughput and the lowest, most predictable latency of any messaging system. Its binary encoding module, SBE, claims to be eight times faster than Google's Protobuf. Aeron also claims to beat the best commercial products in throughput and to match 90% of the best in latency; it can push small 40-byte messages at a rate of 6 million per second.
Aeron’s goals and usage scenarios are as follows:
- High-throughput, low-latency unicast and multicast communication;
- Support for multiple transport media (UDP, InfiniBand, shared memory, etc.);
- Support for multiple streams with different QoS;
- Efficient flow control algorithms for unicast and multicast;
- The receiver can control the rate through flow control.
Aeron is modern and draws on protocols such as SPDY, HTTP/2 and WebSocket.
Currently Aeron has few Chinese documents, but its official documentation is very detailed and it is actively maintained. See its transport protocol definition.
You can implement different congestion control algorithms according to your own needs; currently only the Cubic congestion control algorithm is provided.
Aeron: Do We Really Need Another Messaging System?
In summary, Aeron is a relatively complex framework whose goal is not only Reliable UDP but efficient cross-machine, cross-protocol communication, with relatively high usage and customization costs; it does, however, have a dedicated team maintaining it. It has better application scenarios in the microservices/RPC field.
9. RSocket
GitHub: rsocket/rsocket
RSocket is a new application-layer network protocol developed by engineers at Facebook, Netifi and Pivotal, with implementations in Java, Kotlin, JavaScript, Go, .NET and C++. RSocket can run over different underlying transports, including TCP, WebSocket and Aeron. TCP suits interaction between components of a distributed system, WebSocket suits browser-server interaction, and Aeron provides a UDP-based transport, which lets RSocket adapt to different scenarios.
RSocket supports four interaction modes:
| Mode | Description |
| --- | --- |
| Request/Response | The most typical and common pattern: the sender sends a message and waits for the corresponding response. |
| Request/Stream | Each request message from the sender corresponds to a stream of messages from the receiver as the response. |
| Fire-and-Forget | The sender's request has no response. |
| Channel | A bidirectional transmission channel is established between sender and receiver. |
RSocket's communication model is peer-to-peer; it no longer follows the traditional client -> server model. In RSocket both sides have equal status: the "server" can call services on the "client" just as the client calls the server.
Application scenarios and ecosystem:
- RSocket && Dubbo: Dubbo 3.0.0-SNAPSHOT provides reactive-programming support based on RSocket, and users can easily use RSocket's semantics;
- Spring Framework 5.2 plans to use RSocket as the default communication protocol, and Spring Boot provides RSocket support;
- RSocket && microservices: the main obstacle for RSocket is that applications must all communicate with each other over RSocket, whereas after microservices became popular, many layers of the technology stack were introduced precisely to "simplify" communication between microservices.
See "Reactive data transfer over the network: reactive programming with RSocket".
Conclusion: RSocket is an application-layer wrapper over reliable network transport protocols that offers a friendly API to the upper layers, making it easy to switch between TCP/WebSocket/UDP and other transport- and application-layer protocols. For distributed system integration, RSocket is a good choice.
Open source RUDP solution performance comparison
Comparison of transmission performance between TCP and RUDP in the case of packet loss in weak networks:
In the chart, the horizontal axis is RTT (round-trip time) and the vertical axis is the proportion of transfers completed; it highlights RUDP's advantage on networks with high packet-loss rates. RUDP completes about 70% of data transfers within 50-150 ms, whereas TCP's completions are spread evenly across the delay bands. The other three charts likewise show that, compared with TCP, RUDP finishes transferring most of the data within a shorter delay.
1. RakNet vs. libenet:
Server logic: a single process running at a 1US frame rate, echoing packets back to the client.
Client logic: each client process performs one of the following at random every 30 ms:
- re-establishes a connection with the target server;
- sends a ping packet to the target server;
- sends out-of-order packets to the target server with random combinations of transmission mode, transmission channel and other parameters.
2. Comparison of UDT, KCP and ENet (original document):
The scenarios for this test are real-time combat games, such as first-person multiplayer shooters. Real-time PvP games are characterized by small, frequent packets and require as little delay as possible.
Test environment:
- The test server is deployed on the public Internet with 5 Mbps of bandwidth;
- The client runs on a computer with 10 Mbps ADSL;
- Both bandwidth figures are much larger (about 10x) than actually required;
- The client sends a 500-byte packet every 50 milliseconds (a second test uses 50-byte packets);
- The server sends the data back after receiving each packet.
Test results:
UDT:
- UDT does not do well in real-time PvP games;
- Under normal conditions its delay is fine;
- UDT performs poorly when network latency occurs;
- In serious cases the delay reaches several seconds or even ten-plus seconds, and it is not expected to recover.
ENet:
- ENet performs better than UDT in real-time PvP games;
- ENet's delay is around 1 second, and recovering from the delay takes seconds;
- Worse than KCP, but some games can tolerate a 1-second delay.
KCP:
- KCP's delay stays under 1 second;
- KCP is better than UDT and ENet: when network latency occurs, KCP's delay stays under 2 seconds.
Conclusion:
- KCP is the first choice for real-time PvP games (its documentation is in Chinese);
- When network delay occurs, KCP's delay stays under 1 second and its efficiency is about three times that of ENet;
- If your game can tolerate a 2-second delay, ENet is a good choice (though ENet is poorly documented).
3. Comparison between KCP and RakNet (original document, from SpatialOS, a massively multiplayer game server engine):
Over unreliable networks, KCP outperforms both TCP and RakNet in terms of latency. With 25 entities, across 436,271 packets KCP's maximum RTT is 51 ms while RakNet's is 114 ms.
With 50 entities KCP also performs better: for 99.8% of the data, KCP's RTT is 44 ms versus RakNet's 243 ms, and KCP's maximum RTT is 83 ms versus RakNet's 327 ms.
4. Tencent cloud gaming's in-depth test of ENet (original address):
The original ENet keeps TCP's exponential backoff for retransmission: the interval for each retransmission is still multiplied by 2, and the default RTO is also high. This is probably the main reason ENet did worse than KCP in the tests above. What happens if the ENet code is adjusted slightly?
The authors made slight adjustments to libenet: the default RTT was changed from 500 ms to 50 ms, and the exponential backoff strategy for timeout retransmission was removed.
In terms of average response time TCP is not at an obvious disadvantage: with 30 ms of delay and 1% packet loss, the improved ENet averages 69 ms RTT, the original ENet 67 ms, and TCP 67 ms. But looking at responses over 300 ms: with 30 ms delay and 1% packet loss, the improved ENet has no RTTs above 300 ms while TCP has more than 2%. In a game, that difference noticeably affects the experience. The results show that TCP has significant problems as soon as the network becomes even a little unstable, while the improved ENet has a clear advantage.
Application cases
Application cases shared by Yuan Rongxi of Xuebajun (学霸君):
- A real-time 1v1 tutoring (Q&A) system with a global delay of 250 ms, using an RUDP + multi-point relay intelligent routing scheme;
- A 500 ms 1080P video interaction system, using an RUDP + proxy scheduling transmission scheme;
- A 6-way real-time synchronous writing system, using RUDP + redo-log reliable transfer technology;
- A 720P pad screen-mirroring system over weak Wi-Fi, using RUDP + GCC real-time flow control;
- A large live-streaming P2P distribution system that saves more than 75% of distribution bandwidth through RUDP + multi-point parallel relay.
Refer to RUDP transfers.
MOBA and "battle royale" games use UDP. Reference: juejin.cn/post/684490…
Conclusion:
TCP has evolved over many years and is more stable than UDP, but when there are high requirements on latency and throughput, some RUDP schemes are worth considering as TCP replacements. I summarize these RUDP libraries as follows:
- KCP is widely used in multiplayer games, with a relatively complete ecosystem and strong customization capabilities.
- Aeron is a modern RUDP solution that targets not only RUDP but efficient cross-machine data transfer, and it is well documented. However, its internal implementation is more complex, and there is a certain cost to using and customizing it.
- RSocket can be layered over Aeron (among other transports), is aimed at peer-to-peer communication, and is supported by the Spring framework. If the business needs it, you can easily switch and migrate the transport protocol between TCP/WebSocket/RUDP. However, such a highly encapsulated library is naturally less able to customize the underlying transport.
- WebRTC's DataChannel can be used as RUDP, but it is an implementation of SCTP over UDP and has poor customizability. Moreover, the cost of introducing a complex technical framework like WebRTC just to get RUDP is high.
- SCTP over UDP is not feasible in the short term.
- QUIC is also a relatively complex protocol with limited customizability, though different congestion control algorithms can be configured at the business layer, and the internal implementation is very advanced and modern. However, it still lacks an ecosystem.
- UDT/ENet/RakNet are no longer actively updated and feel close to abandoned.
The above is my summary; you should also weigh the options against your actual business needs.