One: flow control

What is flow control, and what is its purpose?

If the sender sends data faster than the receiver can accept it, packets will be lost. To avoid this, flow control throttles the sender's sending rate so that the receiver can keep up. The fundamental purpose of flow control is to prevent packet loss at the receiver, and it is part of what makes TCP reliable.

How is flow control achieved?

It is implemented by the sliding window protocol (the continuous ARQ protocol). The sliding window protocol not only ensures error-free, in-order delivery of segments, but also realizes flow control: every ACK the receiver returns carries the current size of its receive window, and the sender uses that window size to limit how much data it sends.
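As a rough illustration (not a real TCP stack; the send_segment callback and the field names here are made up for the sketch), the sender only keeps as much unacknowledged data in flight as the most recently advertised window allows:

```python
# Minimal sketch of receive-window-limited sending (illustrative only,
# not a real TCP implementation).

class FlowControlledSender:
    def __init__(self, mss=1460):
        self.mss = mss              # maximum segment size, in bytes
        self.next_seq = 0           # next byte to send
        self.unacked_base = 0       # oldest unacknowledged byte
        self.rwnd = 0               # receiver's advertised window (from ACKs)

    def bytes_in_flight(self):
        return self.next_seq - self.unacked_base

    def can_send(self):
        # The sender may keep at most rwnd bytes unacknowledged.
        return self.bytes_in_flight() < self.rwnd

    def on_ack(self, ack_seq, advertised_window):
        # Each ACK slides the window and refreshes the advertised window.
        self.unacked_base = max(self.unacked_base, ack_seq)
        self.rwnd = advertised_window

    def send_ready_data(self, send_segment):
        # Send segments only while the receive window permits it.
        while self.can_send():
            size = min(self.mss, self.rwnd - self.bytes_in_flight())
            send_segment(self.next_seq, size)
            self.next_seq += size
```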

Can flow control cause deadlock? How is the deadlock avoided?

When the sender receives an ACK advertising a window of 0, it stops sending and waits for the receiver's next window update. But if the later segment announcing a non-zero window is lost in transit, the sender keeps waiting, while the receiver believes the update has arrived and waits for new data; the two sides wait for each other, resulting in a deadlock. To avoid deadlocks caused by flow control, TCP uses a persistence timer. The timer is started whenever the sender receives a zero-window advertisement. When it expires, the sender sends a window probe segment asking for the receiver's current window size. If the receiver still reports a zero window, the timer is restarted and the sender keeps waiting; if the window is non-zero, the earlier window update must have been lost, so the sender updates its send window and resumes sending, breaking the deadlock.
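A minimal sketch of the persistence-timer idea, assuming a fixed probe interval and a placeholder probe function (real TCP uses an adaptive, backed-off timer and real socket machinery):

```python
import time

# Sketch of persist-timer behaviour (simplified and single-threaded;
# send_window_probe is a placeholder, not a real socket call).

PERSIST_TIMEOUT = 5.0  # seconds; illustrative fixed value

def wait_out_zero_window(send_window_probe, last_advertised_window):
    """Called after an ACK advertising a zero window is received."""
    rwnd = last_advertised_window
    while rwnd == 0:
        time.sleep(PERSIST_TIMEOUT)     # persistence timer expires
        rwnd = send_window_probe()      # probe: "how big is your window now?"
        # If the reply still advertises 0, loop and restart the timer.
    # Non-zero reply: the earlier window update was lost; resume sending.
    return rwnd
```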

Two: the difference between congestion control and flow control

Congestion control: Congestion control is concerned with the network as a whole; it prevents too much data from being injected into the network so that links and routers are not overloaded. The commonly used methods are: (1) slow start and congestion avoidance; (2) fast retransmission and fast recovery.

Flow control: Flow control is an end-to-end concern between a sender and a receiver; it throttles the sender's sending rate so that the receiver can receive the data in time, preventing packet loss at the receiver.

Three: congestion control algorithms

The discussion below makes two assumptions: 1. Data flows in one direction only, and the other direction carries only acknowledgements; 2. The receiver's buffer is always large enough, so the sender's window size is determined solely by the degree of network congestion.

(I) Slow start algorithm:

The sender maintains a state variable called the congestion window, cwnd. Its size depends on the level of congestion in the network and changes dynamically. The sender sets its send window equal to the congestion window; taking the receiver's capacity into account, the actual send window may be smaller than the congestion window (it is the minimum of cwnd and the receiver's advertised window).
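In other words, the effective send window is the smaller of the two limits. A tiny illustrative helper (not part of any real API):

```python
def effective_send_window(cwnd, rwnd):
    # The sender may exceed neither the network limit (cwnd)
    # nor the receiver limit (rwnd, the advertised receive window).
    return min(cwnd, rwnd)
```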

The idea of the slow start algorithm is not to inject a large amount of data at the beginning, but to probe the network for congestion first: start with a small congestion window and increase its size gradually as acknowledgements arrive.

Here the congestion window is measured in segments to illustrate the slow start algorithm; in practice it is measured in bytes. The diagram below:

[Figure: slow start — the congestion window cwnd doubles with each transmission round (RTT)]
As can be seen from the figure above, a transmission round lasts one round-trip time (RTT), and the congestion window cwnd doubles with each transmission round.

To prevent network congestion caused by excessive growth of cwnd, a slow start threshold state variable, ssthresh, is also maintained. It is used as follows: when cwnd < ssthresh, the slow start algorithm is used; when cwnd > ssthresh, the congestion avoidance algorithm is used instead; when cwnd = ssthresh, either slow start or congestion avoidance may be used.

“Slow” does not mean that cwnd grows slowly; it means that cwnd is set to 1 segment when TCP starts sending and then increases gradually, which is far gentler than suddenly injecting a large number of segments into the network.
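To make the growth pattern concrete, here is a small sketch that counts cwnd in segments (a simplification of the real byte-based accounting), doubling it each RTT until it reaches ssthresh and growing by 1 per round afterwards:

```python
# Sketch of slow start growth, with cwnd measured in segments.

def slow_start_trace(ssthresh, rounds):
    cwnd = 1
    trace = []
    for _ in range(rounds):
        trace.append(cwnd)
        if cwnd < ssthresh:
            cwnd = min(cwnd * 2, ssthresh)  # slow start: double each RTT
        else:
            cwnd += 1                       # congestion avoidance: +1 per RTT
    return trace

print(slow_start_trace(ssthresh=16, rounds=8))
# -> [1, 2, 4, 8, 16, 17, 18, 19]
```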

(II) Congestion avoidance algorithm:

The congestion avoidance algorithm makes the congestion window grow slowly: each round-trip time (RTT), the sender's congestion window cwnd is increased by 1 instead of being doubled. The congestion window therefore grows slowly and linearly.

Whether in the slow start phase or in the congestion avoidance phase, as soon as the sender judges that the network is congested (the criterion being that an acknowledgement is not received in time; the segment may in fact have been lost for other reasons, but the sender cannot tell, so it treats the loss as congestion), it sets the slow start threshold ssthresh to half of the send window at the moment congestion occurred (but not less than 2), then resets the congestion window cwnd to 1 and runs the slow start algorithm again. The goal is to quickly reduce the number of segments the host sends into the network, giving the congested routers enough time to clear the backlog of packets in their queues.
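The reaction to a timeout can be summarised in a couple of lines. The sketch below assumes the send window equals cwnd and counts in segments:

```python
# Sketch of the reaction to a retransmission timeout,
# with cwnd and ssthresh measured in segments.

def on_timeout(cwnd, ssthresh):
    ssthresh = max(cwnd // 2, 2)   # multiplicative decrease, floor of 2
    cwnd = 1                       # restart from slow start
    return cwnd, ssthresh
```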

The whole congestion control process is shown as follows:

[Figure: congestion control — cwnd over transmission rounds: slow start, congestion avoidance, timeout, and recovery]
(1) The congestion window cwnd is initialized to one segment, and the initial value of the slow start threshold ssthresh is 16. (2) Slow start is performed, and cwnd grows exponentially until, in the fourth round, cwnd = 16 = ssthresh; the algorithm then switches to congestion avoidance and the congestion window grows linearly. (3) When a timeout occurs (congestion), ssthresh is updated to 12 (half of the send window at that moment), cwnd is reset to 1, and slow start is executed again. (4) When cwnd = 12 = ssthresh, the algorithm switches back to congestion avoidance.
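The walk-through can be reproduced with a short simulation. The sketch below assumes, consistent with the classic figure, that the timeout happens once cwnd has grown to 24 segments (so that half of the send window is 12):

```python
# Sketch reproducing the walk-through above; cwnd counted in segments,
# timeout assumed to occur when cwnd reaches 24.

def congestion_trace(initial_ssthresh=16, timeout_at_cwnd=24, rounds=22):
    cwnd, ssthresh = 1, initial_ssthresh
    trace = []
    for _ in range(rounds):
        trace.append(cwnd)
        if cwnd >= timeout_at_cwnd:         # retransmission timeout detected
            ssthresh = max(cwnd // 2, 2)    # multiplicative decrease -> 12
            cwnd = 1                        # back to slow start
        elif cwnd < ssthresh:
            cwnd = min(cwnd * 2, ssthresh)  # slow start: exponential growth
        else:
            cwnd += 1                       # congestion avoidance: +1 per RTT
    return trace

print(congestion_trace())
# cwnd per round: 1 2 4 8 16 17 ... 24, then timeout: 1 2 4 8 12 13 14 ...
```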

About Multiplicative Decrease and Additive Increase:

“Multiplicative decrease” means that whenever the sender judges the network to be congested, whether it is in the slow start phase or the congestion avoidance phase, it sets the slow start threshold ssthresh to half of the current send window and runs the slow start algorithm. If the network becomes congested frequently, ssthresh therefore drops rapidly, greatly reducing the number of segments injected into the network. “Additive increase” means that once the congestion avoidance algorithm is running, the congestion window increases slowly (by 1 per RTT) to prevent congestion from recurring too soon. Together they form the AIMD (additive increase, multiplicative decrease) algorithm.

Note: “congestion avoidance” does not completely avoid congestion, but makes the network less prone to congestion.

(III) Fast retransmission algorithm:

Fast retransmission requires the receiver to send a duplicate acknowledgement immediately upon receiving an out-of-order segment (so that the sender learns as early as possible that a segment has not reached the other side; this can improve network throughput by about 20%), rather than waiting until it has data of its own to send and piggybacking the acknowledgement. The fast retransmission algorithm says that as soon as the sender receives three duplicate acknowledgements in a row, it should immediately retransmit the missing segment rather than wait for its retransmission timer to expire. The diagram below:

[Figure: fast retransmission — three duplicate ACKs trigger immediate retransmission of the missing segment]
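A rough sketch of the three-duplicate-ACK trigger (simplified: it only tracks the most recent ACK number; retransmit is a placeholder callback, not a real API):

```python
# Sketch of the duplicate-ACK trigger for fast retransmission.

DUP_ACK_THRESHOLD = 3

class FastRetransmitDetector:
    def __init__(self):
        self.last_ack = None
        self.dup_count = 0

    def on_ack(self, ack_seq, retransmit):
        if ack_seq == self.last_ack:
            self.dup_count += 1
            if self.dup_count == DUP_ACK_THRESHOLD:
                # Three duplicate ACKs: retransmit the missing segment now,
                # without waiting for the retransmission timer to expire.
                retransmit(ack_seq)
        else:
            self.last_ack = ack_seq
            self.dup_count = 0
```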




(IV) Fast recovery algorithm:

The fast recovery algorithm is used together with fast retransmission, and it has the following two points:

(1) When the sender receives three duplicate acknowledgements in a row, it performs the multiplicative decrease step and halves ssthresh (to prevent network congestion). (2) However, it does not then run the slow start algorithm: the sender now believes that the network is probably not congested, because it would not be receiving several duplicate acknowledgements if it were. Instead of slow start, cwnd is set to the halved value of ssthresh, and the congestion avoidance algorithm is then executed so that cwnd grows slowly. The TCP Reno version, which behaves this way, is by far the most widely used.

[Figure: fast recovery — after three duplicate ACKs, ssthresh and cwnd are halved and congestion avoidance resumes]
Note: When the fast recovery algorithm is used, the slow start algorithm is run only when a TCP connection is first established and when a retransmission timeout occurs.
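A sketch contrasting the two loss reactions described above, counting cwnd and ssthresh in segments (real implementations work in bytes and include more bookkeeping than this):

```python
# Simplified Reno-style reactions to the two loss signals.

def on_three_duplicate_acks(cwnd, ssthresh):
    # Fast retransmission + fast recovery: halve, then continue with
    # congestion avoidance instead of slow start.
    ssthresh = max(cwnd // 2, 2)
    cwnd = ssthresh
    return cwnd, ssthresh

def on_retransmission_timeout(cwnd, ssthresh):
    # A timeout is treated as heavier congestion: halve ssthresh and
    # fall back to slow start with cwnd = 1.
    ssthresh = max(cwnd // 2, 2)
    cwnd = 1
    return cwnd, ssthresh
```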