We know that forwarding a packet through a packet switch also takes time, so each packet switch has a buffer (queue) for storing incoming packets. Clearly, this buffer is finite. If too many packets enter the network, the buffers fill up, and newly arriving packets are dropped.

Network congestion costs:

  • When the packet arrival rate approaches the link capacity, packets experience large queuing delays.
  • The sender must retransmit to compensate for packets lost to buffer overflow.
  • Unneeded retransmissions by the sender (for example, after an unnecessarily long delay) cause routers to spend link bandwidth forwarding redundant copies of packets.
  • When a packet is dropped along a path, the transmission capacity that every upstream router spent forwarding that packet is ultimately wasted.

TCP provides end-to-end congestion control: the end systems infer network congestion by observing network behavior.

End-to-end congestion control means that the network layer provides no explicit support for congestion control at the transport layer. When the network layer does provide such explicit feedback, the approach is instead called network-assisted congestion control.

How does TCP limit the sending rate?

Similar to TCP flow control, the sender maintains a variable called the congestion window (CWND) and limits its transmission rate by enforcing LastByteSent – LastByteAcked <= min(CWND, RWND); that is, the amount of sent-but-unacknowledged data must not exceed the smaller of the congestion window and the receive window.

Ignoring RWND (the receive window), the sender adjusts the rate at which the connection sends data by adjusting the value of CWND (the congestion window): roughly CWND/RTT bytes per second.
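
A minimal sketch (not a real TCP implementation) of the check this constraint implies; the function and its byte counters are illustrative names mirroring the variables in the text:

```python
def can_send(last_byte_sent, last_byte_acked, cwnd, rwnd, segment_size):
    """Return True if one more segment still fits under min(CWND, RWND)."""
    in_flight = last_byte_sent - last_byte_acked   # sent but unacknowledged bytes
    return in_flight + segment_size <= min(cwnd, rwnd)

# Example: 9000 bytes in flight, CWND = 10000, RWND = 20000, MSS = 1460
print(can_send(109000, 100000, 10000, 20000, 1460))  # False: the segment would exceed CWND
```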

How does TCP sense network congestion?

We define a loss event at a TCP sender as either a timeout or the receipt of three duplicate ACKs from the receiver. In other words, if no loss event occurs, the network is assumed to be in good condition; if a loss event occurs, the network is congested.
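
To make the two signals concrete, here is a hedged sketch of how a sender could classify them; `DUP_ACK_THRESHOLD`, the `state` dictionary, and the event strings are illustrative assumptions, not real TCP stack internals:

```python
from typing import Optional

DUP_ACK_THRESHOLD = 3                     # three duplicate ACKs signal a loss event

def on_ack(ack_no: int, state: dict) -> Optional[str]:
    """Classify an incoming ACK; state holds 'last_ack' and 'dup_count'."""
    if ack_no == state["last_ack"]:
        state["dup_count"] += 1
        if state["dup_count"] == DUP_ACK_THRESHOLD:
            return "loss_by_dup_acks"     # first kind of loss event
    else:
        state["last_ack"] = ack_no        # new data acknowledged
        state["dup_count"] = 0
    return None

def on_retransmission_timeout() -> str:
    return "loss_by_timeout"              # second kind of loss event

# Example: three duplicate ACKs for the same byte trigger a loss event.
state = {"last_ack": 1000, "dup_count": 0}
for ack in (1000, 1000, 1000):
    event = on_ack(ack, state)
print(event)                              # "loss_by_dup_acks"
```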

How does TCP adjust the sending rate?

A TCP sender wants to send at the highest possible rate that does not congest the network.

  • A loss event indicates congestion, so the sending rate should be reduced when a segment is lost.
  • The arrival of an acknowledgement for a previously unacknowledged segment indicates the network is delivering data, so the sender can increase its rate.
  • The sender probes for the rate at which congestion begins, backs off from that rate, and then resumes probing (see the sketch after this list).
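
This increase-then-back-off probing is the familiar additive-increase, multiplicative-decrease (AIMD) pattern. A rough sketch under assumed constants (the MSS value and the exact increase rule are simplifications, not TCP's precise rules):

```python
MSS = 1460                                 # assumed maximum segment size, in bytes

def on_new_ack(cwnd: int) -> int:
    """Additive increase: keep probing for bandwidth, about +1 MSS per RTT overall."""
    return cwnd + MSS * MSS // cwnd

def on_loss_event(cwnd: int) -> int:
    """Multiplicative decrease: back off from the rate at which loss appeared."""
    return max(MSS, cwnd // 2)
```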

TCP congestion control algorithm

The TCP congestion control algorithm has three parts: (1) slow start; (2) congestion avoidance; (3) fast recovery.

Slow start begins at a low rate and grows the window exponentially; when the window approaches the rate at which congestion last occurred (tracked by the slow start threshold, the ssthresh variable), the sender enters congestion avoidance, stops exponential growth, and switches to linear growth.
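
A simplified sketch of the per-ACK growth rule this describes, with ssthresh as the switch point (the MSS constant and the exact increments are illustrative assumptions):

```python
MSS = 1460                                   # assumed segment size, in bytes

def grow_cwnd(cwnd: int, ssthresh: int) -> int:
    """Per-ACK window growth: exponential below ssthresh, linear above it."""
    if cwnd < ssthresh:
        return cwnd + MSS                    # slow start: window doubles roughly every RTT
    return cwnd + MSS * MSS // cwnd          # congestion avoidance: ~+1 MSS per RTT
```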

In addition, the TCP congestion control algorithm treats a timeout and the receipt of three duplicate ACKs differently. Both indicate packet loss, but a timeout signals more severe congestion than three duplicate ACKs. Therefore, TCP re-enters slow start after a timeout, while it enters fast recovery after three duplicate ACKs.

In fast recovery, the loss is taken to indicate only mild congestion, so there is no need to drop the rate all the way to the minimum. The congestion window is increased for each additional duplicate ACK received, and this continues until either a timeout occurs (severe congestion, so slow start begins again) or a new ACK arrives (the mild congestion has cleared, so the sender enters congestion avoidance and increases the sending rate linearly).
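
A hedged sketch of how the two loss signals could be handled, following the textbook (Reno-style) reactions described above with simplified details; the constants and return shape are assumptions:

```python
MSS = 1460                                  # assumed segment size, in bytes

def on_timeout(cwnd: int, ssthresh: int):
    """Severe congestion: remember half the window, then re-enter slow start."""
    ssthresh = max(cwnd // 2, 2 * MSS)
    cwnd = MSS                              # collapse to one segment
    return cwnd, ssthresh

def on_three_dup_acks(cwnd: int, ssthresh: int):
    """Milder congestion: halve the window and enter fast recovery."""
    ssthresh = max(cwnd // 2, 2 * MSS)
    cwnd = ssthresh + 3 * MSS               # inflate for the three duplicate ACKs
    return cwnd, ssthresh
```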

The whole of congestion control can be summed up in one sentence: a TCP sender sends at the highest rate it can without congesting the network. As a result, the sender's sending rate traces a sawtooth pattern over time.
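
A toy simulation (purely illustrative; the "capacity" is a synthetic stand-in for the bottleneck, not anything measured) that reproduces this sawtooth by combining the growth and back-off rules above:

```python
MSS = 1460
capacity = 40 * MSS              # assumed bottleneck capacity, for illustration only
cwnd, ssthresh = MSS, 32 * MSS

for rtt in range(25):
    if cwnd > capacity:          # synthetic loss once the window outgrows the capacity
        ssthresh = max(cwnd // 2, 2 * MSS)
        cwnd = ssthresh          # Reno-style halving, as after fast recovery
    elif cwnd < ssthresh:
        cwnd *= 2                # slow start: exponential growth per RTT
    else:
        cwnd += MSS              # congestion avoidance: linear growth per RTT
    print(f"RTT {rtt:2d}: cwnd = {cwnd:6d} bytes")
```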