This chapter directory
- 1. What are TCP and UDP?
- 2. Transport layer
- 3. What are the characteristics of TCP connections?
- 4. TCP header format
- 5. TCP state machine
- 6. Three-way handshake
- 7. Four waves
- 8. How can TCP connections be reliable?
- 9. What is the UDP header format?
- 10. What are the characteristics of UDP connections?
- 11. What are the scenarios in which UDP connections are used?
Structure diagram of this chapter
1. What are TCP and UDP?
1.1. Why layer the network?
In my last article, I talked about network layering and introduced the TCP/IP model,
As follows:
But have you ever wondered why the network should be layered in the first place?
A computer network is a very complex system. To establish a connection between two hosts, many scenarios must be considered. If we want to transfer a file between two hosts, then we need to consider at least the following:
- (1) Line: the host that initiates the communication must first activate the transmission line to ensure that the data to be transmitted can be sent and received normally on it;
- (2) Receiving host: the network must be told which host the data should be delivered to;
- (3) Ensure reachability: the host that initiates the communication should confirm that the receiving host is powered on and in a normal working state;
- (4) Ensure receipt of files: the initiating host should confirm that the other host is ready to accept and process the file;
- (5) Confirm the file format: ensure that the receiving host can handle the file format; if not, one of the two hosts must be able to convert it;
- (6) Ensure the normal arrival of data: if anomalies occur during transmission, such as packet loss, network failures, transmission errors, or duplication, various fault-tolerance mechanisms are needed to ensure the data still reaches the receiving host correctly;
These are just a few examples; there are many other scenarios to handle that I won't list here;
As you can see, even a simple file transfer requires considering this many scenarios; you can imagine how complex a computer network really is;
Now that we know these problems exist, let's think about how to solve them;
Since there are so many scenarios, can we classify and stratify them, assigning each type of scenario to a particular layer for processing?
The answer is: yes;
For the file transfer above, we can divide the work into three layers: file transfer, communication services, and network access;
Take a look at the general flow chart:
Of course, the above is just our assumption; the actual division into modules is far more complex than this;
The original OSI model, for example, had seven layers;
Are you wondering what the benefits of this layering are? See below;
Benefits of layering:
- (1) Each layer is relatively independent and only needs to deal with its own module. A complex problem can be solved by dividing it into several small parts and handing each over to the corresponding module;
- (2) Good flexibility. If the logic of a certain layer changes, as long as its public interface does not change, the work of the other layers is unaffected, and the modules stay relatively flexible;
- (3) The structure is separable: each layer can adopt the implementation best suited to itself without considering the impact on other modules;
- (4) Easy to implement and maintain. If something needs to be modified at a certain layer, as long as its public interface is unchanged, the modification will not affect the other layers, because the coupling between modules is low;
- (5) Services can be precisely defined: each layer's responsibility is clear, and it provides the corresponding services;
1.2 What are TCP and UDP?
TCP (Transmission Control Protocol) and UDP (User Datagram Protocol) are transport layer protocols, implemented on top of the network layer's IP protocol;
TCP provides reliable transmission for communication between two hosts: data is divided into segments of a suitable size, and a series of mechanisms such as timeout retransmission and arrival acknowledgement ensure that the data is transmitted accurately. Because TCP provides end-to-end reliable transport, software at the application layer does not need to care about the details of transmission;
UDP provides simple and fast data transmission, but does not guarantee the accuracy or stability of the data, so application-layer software using this protocol must handle reliability and accuracy itself;
2. Transport layer
2.1 How does the transport layer work?
When applications communicate, they first need to establish a connection, and the basis of establishing a connection is identifying the process involved, i.e., its PID. For purely local communication, PIDs basically will not conflict, but when communicating with other hosts on the network, the probability of a PID conflict rises sharply;
So how do you solve this problem?
We know that an IP address is unique in the network and will not conflict with other hosts, so we can use it to avoid the conflict: identify an endpoint by IP address + port number;
This identifier is the socket, with the final format: socket = (IP address : port number);
2.2. What is a socket?
Socket literally translates to "socket"; it is what applications use to communicate over a network.
A socket is an endpoint used by two applications to transfer data through the transport layer. It is an abstract concept implemented by the operating system.
For example, when the current host transfers data to another host, the data is written to a socket, transmitted through the transport layer to a socket on the other host, and then the remote application reads the data from that socket;
2.3. Socket API functions
A socket follows the "open – read/write – close" pattern. The main functions are as follows:
- socket(): creates a socket based on the address family, socket type, and protocol type;
- bind(): usually used by a server to bind an IP address and port number to a socket;
- listen(): used by the server to listen for requests on the port;
- accept(): used by the server to accept connection requests from clients;
- connect(): used by the client to connect to the specified port of the specified host;
- send()/recv(): send data to and read data from the socket;
- close(): closes the socket;
- gethostbyname() and gethostbyaddr(): used to resolve host names and addresses;
- select(): used to obtain the subset of a list of sockets that are ready to read, ready to write, or have an error;
- poll(): used to check the status of a socket; a socket can be tested to see whether it is writable, readable, or has an error;
- getsockopt(): queries the current value of a particular socket option for the specified socket;
- setsockopt(): sets a particular socket option for the specified socket.
2.4 Basic process of sockets in TCP
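The TCP flow above can be sketched with Python's standard socket module: a minimal echo exchange over the loopback address following the server's socket() → bind() → listen() → accept() sequence and the client's socket() → connect() sequence from section 2.3. The port number 50007 is an arbitrary choice for the demo.

```python
import socket
import threading

def run_server(ready):
    # Server side: socket() -> bind() -> listen() -> accept() -> recv()/send() -> close()
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", 50007))      # demo port, an arbitrary choice
    srv.listen(1)
    ready.set()                         # signal that the server is listening
    conn, addr = srv.accept()           # blocks until a client connects
    data = conn.recv(1024)
    conn.sendall(b"echo: " + data)      # echo the payload back
    conn.close()
    srv.close()

ready = threading.Event()
t = threading.Thread(target=run_server, args=(ready,))
t.start()
ready.wait()

# Client side: socket() -> connect() -> send()/recv() -> close()
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", 50007))
cli.sendall(b"hello")
reply = b""
while True:                             # read until the server closes the stream
    chunk = cli.recv(1024)
    if not chunk:
        break
    reply += chunk
cli.close()
t.join()
print(reply)
```

The read loop on the client side is deliberate: because TCP is a byte stream (section 3.5), a single recv() is not guaranteed to return the whole reply, so the client reads until the server closes the connection.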
2.5 Basic process of sockets in UDP
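The UDP flow is simpler, since no connection is established: there is no listen() or accept(), and each datagram carries the sender's address. A minimal sketch over the loopback address (port 50008 is again an arbitrary demo choice):

```python
import socket

# UDP needs no listen()/accept(): socket() -> bind() -> sendto()/recvfrom() -> close()
srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 50008))          # demo port, an arbitrary choice
srv.settimeout(2)                       # avoid blocking forever if a datagram is lost

cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
cli.settimeout(2)
cli.sendto(b"ping", ("127.0.0.1", 50008))   # no connection is set up first

data, addr = srv.recvfrom(1024)         # each datagram arrives with the sender's address
srv.sendto(b"pong", addr)
reply, _ = cli.recvfrom(1024)
cli.close()
srv.close()
print(reply)
```

Note the timeouts: since UDP gives no delivery guarantee, a real program must be prepared for a datagram to simply never arrive.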
3. What are the characteristics of TCP connections?
3.1 connection-oriented
TCP is a connection-oriented protocol: a connection must be established before communication and released after communication ends.
3.2. Point-to-point
Each TCP connection has exactly two endpoints; it is strictly one-to-one, not one-to-many or many-to-one, and the multicast and broadcast transmission modes are not supported.
3.3 Reliable delivery
TCP provides reliable delivery, ensuring that data is transmitted to each other in an orderly manner without error, duplication, or loss.
3.4 Full-duplex communication
Both sides of a TCP communication can simultaneously send data to and receive data from each other. Each side maintains a local send cache and receive cache to buffer outgoing and incoming data, so the application does not need to care about the mechanics of sending and receiving; it only needs to hand the data to the transport layer;
3.5. Byte stream oriented
Stream: a sequence of bytes flowing into or out of a process;
The data the application wants to transmit is written into TCP's send cache; TCP then takes a certain sequence of bytes out of the send cache for transmission according to the actual conditions. After receiving the data, the other end puts it into its receive cache, and the application reads the bytes from there.
In other words, the blocks of bytes the application writes to the send cache do not determine the blocks TCP actually sends: TCP splits the byte sequence in the send cache into segments of a suitable size for transmission.
This diagram only shows the general flow; in reality, data is transmitted only after encapsulation by the IP layer and the data link layer.
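The byte-stream behavior is easy to observe in code. This minimal sketch (assuming a POSIX system, since socketpair() with AF_UNIX is used for a local connected stream pair) makes two separate writes on one end and reads them back as a single continuous byte sequence on the other: the stream preserves bytes, not write boundaries.

```python
import socket

# socketpair() gives two connected stream sockets on the local machine
a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
a.sendall(b"hello ")
a.sendall(b"world")      # a second, separate write
data = b.recv(1024)      # both writes arrive as one continuous byte sequence
a.close()
b.close()
print(data)
```

This is why application protocols on top of TCP must define their own message boundaries (length prefixes, delimiters, etc.).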
4. What is the format of the TCP header?
4.1. Look at the structure of TCP data:
4.2. Look at the structure of the TCP header:
- (1) Source port and destination port: each TCP segment contains the source and destination port numbers, used to identify where the data came from and which port it should be delivered to. A program on the computer obtains TCP packets by listening on the port number;
- (2) Sequence number: used to mark the order of TCP packets. During transmission, TCP first cuts the data into many small packets; after marking their order, the receiver can reassemble them without error, solving the problem of out-of-order packets. Every byte transmitted is counted;
- (3) Acknowledgement number: contains the sequence number of the next byte the acknowledging end expects to receive, i.e., the sequence number of the last successfully received byte plus 1. It guarantees the ordering of packet transmission, and is valid only when the ACK flag (described below) is 1;
- (4) Header length: because the TCP header contains a variable-length part, a value is needed to represent the length of the header;
- (5) Reserved bits: reserved for later use;
- (6) Flag bits: describe the attributes of the packet and drive the TCP state machine:
  - URG: indicates whether the segment contains urgent data; when URG=1, urgent data is present;
  - ACK: indicates whether the acknowledgement number is valid; when ACK=1, it is valid. After a TCP connection is established, every packet sent must have ACK=1;
  - PSH: tells the receiver whether the data should be handed to the upper layer immediately; when PSH=1 it is delivered at once, otherwise it may be buffered;
  - RST: when RST=1, an unknown error has occurred on the current link and the connection must be re-established to ensure normal communication;
  - SYN: used when establishing a connection; when SYN=1, it requests a connection or accepts one, and is used in the TCP handshake;
  - FIN: indicates that the sender has finished sending data; when FIN=1, it takes effect;
- (7) Window: a size expressed in bytes, indicating how many bytes the receiver is currently prepared to accept. It is used for flow control, so that the sender neither transmits more than the receiver can accept nor so little that network resources are wasted;
- (8) Checksum: used to verify the TCP data, and is one of the guarantees of TCP's transmission reliability; it is filled in by the sender and verified by the receiver;
- (9) Options: used for additional functionality;
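The fixed 20-byte part of the header described above can be unpacked directly with Python's struct module. A minimal sketch: the field widths (two 16-bit ports, 32-bit sequence and acknowledgement numbers, a 16-bit word holding the 4-bit data offset plus flags, then window, checksum, and urgent pointer) follow the layout just listed, and the sample segment is hand-built for the demo.

```python
import struct

def parse_tcp_header(segment: bytes) -> dict:
    """Unpack the 20-byte fixed part of a TCP header (network byte order)."""
    (src, dst, seq, ack, off_flags, window,
     checksum, urgent) = struct.unpack("!HHIIHHHH", segment[:20])
    return {
        "src_port": src,
        "dst_port": dst,
        "seq": seq,
        "ack": ack,
        "header_len": (off_flags >> 12) * 4,   # data offset is in 32-bit words
        "flags": {name: bool(off_flags & bit) for name, bit in
                  [("URG", 0x20), ("ACK", 0x10), ("PSH", 0x08),
                   ("RST", 0x04), ("SYN", 0x02), ("FIN", 0x01)]},
        "window": window,
        "checksum": checksum,
    }

# Hand-built SYN segment: port 1234 -> 80, seq=1000, header length 20, SYN set
raw = struct.pack("!HHIIHHHH", 1234, 80, 1000, 0, (5 << 12) | 0x02,
                  65535, 0, 0)
hdr = parse_tcp_header(raw)
print(hdr["src_port"], hdr["flags"]["SYN"], hdr["header_len"])
```

Note how the header length comes out as 20 bytes: the data offset field stores 5, counted in 32-bit words.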
5. TCP state machine
5.1. What is the TCP state machine?
The TCP state machine describes how the connection's state changes during the course of network communication;
The three-way handshake and four waves of a TCP communication are exactly processes of TCP state change: because TCP is connection-oriented and reliable, two hosts must establish a connection before communicating, and the process goes through connection, transmission, and close phases, during which the TCP states of the two sides change differently;
5.2 What are the TCP States?
- LISTEN: waiting for a TCP connection request from another host on the listening port;
- SYN_SENT: waiting for acknowledgement after a connection request has been sent;
- SYN_RECEIVED: waiting for final confirmation to establish the connection after receiving a connection request;
- ESTABLISHED: the connection has been established and data can be transmitted;
- FIN_WAIT_1: waiting for acknowledgement of the connection-termination request sent to the remote TCP, or of an earlier termination request;
- FIN_WAIT_2: waiting for a connection-termination request from the remote TCP;
- CLOSE_WAIT: a termination request has been received from the remote end; waiting for the local application to issue its own termination request;
- CLOSING: waiting for acknowledgement of the termination request from the remote TCP;
- LAST_ACK: waiting for acknowledgement of the termination request previously sent to the remote TCP;
- TIME_WAIT: waiting long enough to ensure that the remote TCP has received the acknowledgement of its termination request;
- CLOSED: no connection;
Take a look at the flow chart of a state machine from Computer Networks:
As for the changes to the state machine, we will cover them in the following three handshakes and four waves;
6. Three handshakes
6.1. What is a handshake?
To humans, a handshake indicates a friendly interaction; to a computer, a handshake means something quite different;
Consider the wiki's definition of a computer handshake:
The handshake is used to confirm that parameters and other protocol features are agreed upon before information is transmitted.
In the TCP handshake, two hosts exchange TCP segments to agree on parameters and sequence numbers; each time a host sends such a segment to the other, it counts as one handshake;
6.2. What is a three-way handshake?
1. The client sends a message telling the server that it wants to connect;
At this point, the host initiating the connection sends a TCP packet with flag bit SYN=1 and sequence number seq=x, and then enters the SYN-SENT (synchronization sent) state;
The first handshake is complete;
2. The server tells the client that it received the request and is ready to receive data;
When it receives the connection request, the host parses the data and responds with a TCP packet containing SYN=1 (the connection request/acceptance is valid), ACK=1 (the current acknowledgement is valid), ack=x+1 (the sequence number it expects to receive next), and seq=y (its own initial sequence number); the host receiving the connection then enters the SYN-RCVD (synchronization received) state;
The second handshake is complete;
3. The client sends a message indicating that it wants to start sending data, and once the server receives it, the connection is established successfully;
After receiving the acknowledgement from the other host, the host that initiated the request sends one more acknowledgement. This packet contains ACK=1 (the current acknowledgement number is valid), ack=y+1 (the sequence number it expects to receive next), and seq=x+1 (its own initial sequence number plus 1). At this point the TCP connection is established successfully, and the initiating host enters the ESTABLISHED state;
The third handshake is complete;
The TCP connection is established successfully.
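The sequence-number bookkeeping in the three handshakes can be written out as a short sketch. Here x and y stand for the two hosts' initial sequence numbers; the values 1000 and 5000 are arbitrary demo choices (real implementations pick them randomly).

```python
# Sketch of the sequence-number bookkeeping in the three-way handshake.
x, y = 1000, 5000   # arbitrary demo values for the two initial sequence numbers

# 1st handshake: client -> server, SYN=1, seq=x
syn = {"SYN": 1, "seq": x}

# 2nd handshake: server -> client, SYN=1, ACK=1, seq=y, ack=x+1
syn_ack = {"SYN": 1, "ACK": 1, "seq": y, "ack": syn["seq"] + 1}

# 3rd handshake: client -> server, ACK=1, seq=x+1, ack=y+1
ack = {"ACK": 1, "seq": syn["seq"] + 1, "ack": syn_ack["seq"] + 1}

print(syn_ack["ack"], ack["seq"], ack["ack"])
```

Each side learns the other's starting sequence number and acknowledges it with that number plus one, which is exactly the synchronization the handshake exists to achieve.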
Take a look at the flow chart:
6.3. Why three handshakes?
Let me quote from the book Computer Networks:
In order to prevent an invalid connection-request segment from suddenly arriving at the server and causing an error;
Interpretation of this sentence:
A connection-request packet sent from the client may, due to network delay or blocking, arrive at the server only after the connection has already been released. The server, on receiving it, would reply to the client indicating that the connection was established successfully.
Such a packet is an invalid request; if the server established a connection after receiving it, many invalid connections would be created and resources wasted.
With a third handshake, when the client receives the server's acknowledgement packet, it must send one more packet to confirm the connection, so a stale request never completes a connection on its own.
So is this really the whole story?
This statement is a bit abstract; the real reason starts with TCP's reliability:
To achieve reliable transmission, both sides of a TCP communication must maintain an initial sequence number, which identifies the numbering of the TCP data currently being sent, so that the host receiving the data can determine which data has been received and which has not.
The three-way handshake lets each side tell the other its starting sequence number and receive an acknowledgement of it; one or two handshakes cannot achieve this. What about four handshakes? It would work, but it is unnecessary because it wastes resources.
The purpose of the three-way handshake is thus to synchronize the sequence numbers of the two parties, which underpins reliable transmission, discussed below;
The details are as follows:
7. Four waves
7.1. What is waving?
For humans, waving is a gesture of farewell: wave a hand and take away not a single cloud.
For a TCP connection, when the connection is being closed, the two hosts send packets to each other; each packet sent is called one wave.
7.2. What is four waves?
Assume that two hosts A and B have an established connection, and host A requests to close it.
First wave: host A sends a segment (sequence number seq=p, flag FIN=1) and then enters the FIN-WAIT-1 state. Sending FIN=1 means "I want to close the connection; no more data will follow from me", but at this time host A can still receive data sent by host B;
Second wave: after receiving the FIN segment from host A, host B sends an acknowledgement segment (ACK=1, ack=p+1) and enters the CLOSE-WAIT state. This indicates that host B has received the close request from the closing party.
At this point host B is in a half-closed state: host A will no longer send data to it, but host B may still send data to host A.
After receiving the acknowledgement segment, host A enters the FIN-WAIT-2 state and waits for host B to send its own connection-release segment.
Third wave: after all of its data has been sent, host B sends host A a FIN segment with flag bit FIN=1, sequence number seq=q, acknowledgement number ack=p+1, and flag bit ACK=1, and then enters the LAST-ACK state.
Fourth wave: after receiving the FIN segment, host A sends an acknowledgement segment with flag bit ACK=1, acknowledgement number ack=q+1, and sequence number seq=p+1, and then enters the TIME-WAIT state. At this time the TCP connection is not yet released; host A enters the CLOSED state only after 2*MSL (maximum segment lifetime).
After receiving the acknowledgement segment, host B enters the CLOSED state directly; the host that receives the close request thus finishes closing sooner.
Why wait 2*MSL before entering the CLOSED state?
MSL (Maximum Segment Lifetime) is the maximum time a segment may survive on the network; any segment older than this is discarded.
This value is set by the current host, and TCP allows different implementations to use different values;
The wait has two main functions:
(1) It ensures that the last ACK packet sent by the client can reach the server: this ACK may be lost, in which case the server retransmits its FIN, and the client, still in TIME-WAIT, can acknowledge it again;
(2) It ensures that all segments belonging to the old connection have expired on the network, so they cannot be mistaken for segments of a new connection using the same address and port pair;
In conclusion, although there were four waves, there were more than four operations;
A wave is just an informational message, and both the sender and the receiver have to act on it;
Please see the flow chart:
7.3. Why four waves?
We have walked through the four waves of TCP; now, why are four waves needed when a TCP connection is closed?
Before answering, recall the nature of a TCP connection: it is full-duplex;
Full-duplex communication means that both sides of a TCP connection can send and receive messages at the same time, as described above;
Because both sides of a TCP connection can send and receive messages at the same time, closing the connection means closing both directions, not just one.
Assume hosts A and B are communicating over a TCP connection and host A wants to disconnect. Closing the sending and receiving channels on both sides cannot be achieved by simply sending one message: host A must send packets to close its own sending and receiving channels, and host B, in the same way, must send packets to close its own sending and receiving channels;
So at this point you might have a question:
when I send one packet, could I just close both my sending and receiving channels, so that TCP disconnection only needs two waves?
In theory, yes: send one message and close both of your channels. In practice, why isn't it done this way? Let me explain in detail;
Suppose host A wants to disconnect, sends a TCP packet telling host B that it is disconnecting, and then closes both its read and write channels.
So what's the problem here?
(1) Closing the write channel says "I have no more data to send to host B". If the network loses no packets, the write channel can be closed directly; but if the packet is lost, host B never receives the message, still considers the connection valid, and will continue waiting to read data from host A, which causes transmission problems;
(2) Closing the read channel means "I will not read data sent by host B". But at this point it is unknown whether host B still has data to send; if it does, and host A has already closed its read channel, transmission problems will also occur;
Since it is not possible to close both the read and write channels with a single TCP packet, what design is more reasonable and more consistent with TCP's reliable-transmission characteristics?
Here we see the wisdom of our predecessors;
If you were to design a scheme to close a TCP connection, how would you design it?
First, let's look at the requirements for closing a TCP connection:
(1) both sides of the TCP connection must close their read and write channels; (2) both sides must ensure that data already in flight is transmitted normally; (3) the scheme must preserve TCP's reliable-transmission characteristics;
Given these requirements, what is the best design?
Since we cannot close the read and write channels all at once, let's close them separately.
Assume that two hosts A and B are connected over TCP and host A wants to disconnect:
(1) host A sends a message to host B, telling host B that it is closing its write channel, meaning it has no more data to send to host B;
(2) there are now two possibilities: either host B receives host A's close-write packet, or it does not.
First case: host B receives the packet and knows that host A will send no more data, so host B closes its read channel. Host B must then notify host A that the packet was received; otherwise host A cannot know whether host B received it or not.
Second case: host B never receives the message, so it will not reply. If host A receives no response from host B within a certain period after sending, it must send the packet to host B again;
From these two cases we conclude that after host B receives the packet, it must tell host A "I have received it; you may stop sending to me".
Host A can then close its write channel, and host B can close its read channel.
At this point host A has closed its write channel but not its read channel, while host B has closed its read channel but not its write channel.
Let's continue the analysis:
(1) after all of its remaining data has been sent, host B sends a message to host A, indicating that its transmission is complete and it is closing its write channel;
(2) host A now faces the same two cases as above, received or not received, which I won't repeat here;
In the final step, host B has sent host A a packet saying it is closing its write channel, and host A, after receiving it, must send a packet back to host B confirming receipt.
Therefore, to close a TCP connection properly, the two parties must exchange four packets in total: the classic four waves.
I'm only showing the general principle here; the reality is more complicated and involves the state-machine changes, which you can see in the analysis of the four waves above 👆;
8. How can TCP connections be reliable?
TCP transmits data on top of the IP protocol, but IP is an unreliable transport protocol. So how does TCP achieve reliable transmission on top of an unreliable IP layer?
Let's take a look at its true face!
8.1. Checksum
What is a checksum?
The TCP checksum is an end-to-end checksum, calculated by the sender and verified by the receiver. It is used to check that the data is unchanged between sending and receiving; if the receiver detects an inconsistency, the segment is discarded.
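The checksum TCP uses is the standard Internet checksum: a 16-bit one's-complement sum of the data taken 16 bits at a time (RFC 1071). A minimal sketch, with an arbitrary demo payload; the key property is that when the receiver recomputes the sum over data plus checksum, a result of 0 means the data passed verification.

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement sum used by the TCP/UDP/IP checksums (RFC 1071)."""
    if len(data) % 2:
        data += b"\x00"                           # pad odd-length data
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]     # add the next 16-bit word
        total = (total & 0xFFFF) + (total >> 16)  # fold carries back in
    return ~total & 0xFFFF

# Sender computes the checksum; receiver sums data + checksum and expects 0
payload = b"\x45\x00\x00\x1c"                     # arbitrary demo bytes
ck = internet_checksum(payload)
check = internet_checksum(payload + ck.to_bytes(2, "big"))
print(hex(ck), check)
```

In the real protocol the sum is taken over the TCP header (with the checksum field zeroed), the data, and a pseudo-header of IP addresses, but the arithmetic is exactly this.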
8.2 Confirm the response and serial number
Each packet carries a sequence number, which allows data to be delivered in order: after receiving the data, TCP sorts it by sequence number before handing it to the application layer for processing.
After a host receives TCP data, it must send back a TCP acknowledgement, indicating "I have received the data up to this sequence number", which also tells the sending host the starting sequence number to use next time;
As shown in the figure:
8.3 Retransmission due to timeout
During TCP transmission, every time data is received, an acknowledgement is returned to the sender to confirm that the receiver got the data.
In reality, however, the network is very unstable: after data is sent, the receiver may never receive it, or the receiver may receive it but the sender may never receive the acknowledgement.
There are two main cases:
(1) the sender sends the data, but due to loss or delay the receiver does not receive it; (2) the receiver receives the data, but due to network problems the sender does not receive the acknowledgement packet;
So how does TCP solve these two cases?
TCP introduces a mechanism called timeout retransmission: after data is sent, if no acknowledgement arrives within a certain period, the data is re-sent to the other party.
The timeout retransmission mechanism keys entirely on whether the acknowledgement is received, so the two cases above collapse into one: did the sender receive the acknowledgement or not?
Because of the complexity of the network, a fixed timeout would be too long under some conditions and too short under others, so the timeout is computed dynamically by a careful algorithm.
For more information, see The calculation of timeout retransmission time
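The retransmission loop itself is simple to sketch. Here the "network" is simulated with a seeded random-loss model, so the loss_rate, max_tries, and seed are all assumed demo parameters, not anything TCP specifies; the point is only the control flow: resend until an acknowledgement arrives or we give up.

```python
import random

def send_with_retransmit(data, max_tries=5, loss_rate=0.5, rng=None):
    """Stop-and-wait sketch: resend until an ACK arrives or we give up.
    The 'network' is simulated; loss_rate is an assumed demo parameter."""
    rng = rng or random.Random(42)        # fixed seed so the demo is repeatable
    for attempt in range(1, max_tries + 1):
        lost = rng.random() < loss_rate   # the data or its ACK is lost in transit
        if not lost:
            return attempt                # ACK received: done
        # timeout expired with no ACK -> retransmit on the next loop iteration
    raise TimeoutError("no ACK after %d attempts" % max_tries)

attempts = send_with_retransmit(b"segment-1", loss_rate=0.7)
print("delivered after", attempts, "attempt(s)")
```

Note that the sender cannot tell the two loss cases apart (data lost vs. ACK lost); it just retransmits either way, which is why the receiver must tolerate duplicate segments.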
8.4 Flow control
(1) Window
From a throughput point of view, we want transmission to be as fast as possible, so can we just send all the data at once?
Obviously not. For small amounts of data there may be no problem, but for large amounts, neither the sender's cache nor the receiver's cache may be able to hold that much data; errors become likely, and that contradicts TCP's transmission reliability;
So how do we determine how much data to transmit?
The principle is very simple: the other side tells you. The receiver tells the sender how much data it can receive without error;
So how does TCP implement this?
Through the TCP window, which was mentioned in the TCP header above 👆;
(2) Transmission efficiency
Imagine a situation: the receiver's cache is full, the application takes only one byte at a time out of the TCP cache, and the receiver tells the sender "I can now receive one byte of data". Wouldn't that be a problem?
Transmitting one byte at a time works, but the efficiency is far too low; sending a single byte per transmission wastes resources;
So how do you solve this problem?
By making the receiver wait a little longer and send the acknowledgement only once the free space in its receive cache is reasonably large, carrying the window information that tells the sender it may send that much data; this avoids the efficiency problem.
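The sender's side of flow control reduces to one rule: never put more data in flight than the window the receiver advertised. A minimal sketch, with the simplifying assumption (for the demo) that the receiving application drains its whole buffer between bursts, so the same window is re-advertised each time:

```python
def transfer(data: bytes, recv_window: int) -> list:
    """Flow-control sketch: each burst is capped by the advertised window.
    Assumes the receiver drains its buffer completely between bursts."""
    bursts = []
    sent = 0
    while sent < len(data):
        chunk = data[sent:sent + recv_window]  # never exceed the window
        bursts.append(len(chunk))
        sent += len(chunk)
        # the receiver's ACK re-advertises recv_window once its app drains the buffer
    return bursts

print(transfer(b"x" * 10, recv_window=4))
```

In real TCP the advertised window shrinks and grows with the free space in the receive cache, so the burst sizes vary; the cap itself is the invariant.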
8.5. Congestion Control
Transmission across the network also needs to be controlled. Imagine that if we ignore how much the network can carry and push more into it than it can accept, the result is network congestion;
It is like a highway: the number of cars it can carry is limited. Under normal circumstances the highway runs normally without traffic jams, but during a holiday season a flood of cars pours onto the highway, exceeding the traffic it can hold, and a traffic jam results; the same is true of the network;
Congestion control is different from flow control: congestion control acts on the network, governing the rate of transmission through the network, while flow control acts on the receiver.
The common congestion control methods are: (1) slow start and congestion avoidance; (2) fast retransmit and fast recovery.
What are the algorithms for congestion control?
(1) Slow start algorithm
The sender maintains a state variable called cwnd (congestion window), whose size depends on the degree of network congestion and changes dynamically with it.
The sender's send window is set to the congestion window, but the receiver's flow-control window is also taken into account, so the send window may be smaller than the congestion window.
The slow-start algorithm begins by sending a small amount of data into the network to probe for congestion, then dynamically doubles the window until it reaches a level the network can accept without becoming congested.
(2) Congestion avoidance algorithm
The congestion avoidance algorithm increases the window only slightly each round trip: for example, cwnd grows by 1 instead of doubling.
If the slow start algorithm above kept doubling forever, it would quickly overshoot and cause congestion, and its coarse steps cannot find the precise congestion window capacity.
So the two algorithms are combined: slow start doubles the window until congestion appears, the last doubling is rolled back, and congestion avoidance then takes over to grow the congestion window gradually.
Generally, congestion is detected by the absence of an acknowledgement. An acknowledgement can fail to arrive for other reasons too, but those cases are simply treated as congestion as well.
The congestion avoidance algorithm cannot completely prevent network congestion; it only reduces the probability of congestion occurring.
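The two algorithms above can be sketched as a tiny simulation (an illustration only: cwnd is assumed to be measured in MSS units, the ssthresh value of 16 and the function name are my own):

```python
# Sketch of congestion-window growth: slow start doubles cwnd each
# round trip until it reaches ssthresh, then congestion avoidance
# adds 1 per round trip. Values are in MSS units.
def cwnd_growth(rounds, ssthresh=16):
    cwnd = 1
    history = [cwnd]
    for _ in range(rounds):
        if cwnd < ssthresh:
            cwnd *= 2      # slow start: multiplicative increase
        else:
            cwnd += 1      # congestion avoidance: additive increase
        history.append(cwnd)
    return history

print(cwnd_growth(6))  # [1, 2, 4, 8, 16, 17, 18]
```

Note how growth is exponential up to the threshold and linear afterwards; this is exactly the "remove the last doubling, then probe gently" behaviour described above.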
(3) Fast retransmission algorithm
The fast retransmission algorithm requires the receiver to send a duplicate acknowledgement immediately upon receiving an out-of-order segment, instead of waiting until it has data of its own to send. This way the sender learns as quickly as possible which segment has not arrived, and can retransmit it without waiting for the retransmission timer to expire.
(4) Fast recovery algorithm
The fast recovery algorithm works together with fast retransmission: when the sender receives the duplicate acknowledgements, it does not treat the network as heavily congested, so instead of dropping back to slow start it simply halves the current sending rate.
Because the network is not seriously congested, the congestion avoidance algorithm then takes over, increasing cwnd slowly, so the transmission rate recovers quickly.
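The sender's two reactions can be sketched as follows, assuming the classic TCP Reno rules (the function names and the floor of 2 MSS are illustrative assumptions, not from the article):

```python
# Sketch of a Reno-style sender's reaction to congestion signals.
# All values are in MSS units; each function returns (cwnd, ssthresh).

def on_timeout(cwnd, ssthresh):
    # Timeout: treated as heavy congestion -- halve ssthresh and
    # restart from slow start with cwnd = 1.
    return 1, max(cwnd // 2, 2)

def on_triple_dup_ack(cwnd, ssthresh):
    # Three duplicate ACKs: fast retransmit the missing segment, then
    # fast recovery -- halve the rate and continue in congestion
    # avoidance instead of dropping back to slow start.
    new_ssthresh = max(cwnd // 2, 2)
    return new_ssthresh, new_ssthresh
```

The key contrast: a timeout resets cwnd to 1, while duplicate ACKs only halve it, which is why fast recovery lets the transfer speed come back quickly.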
In conclusion, the above discussion can be summarized in the following flowchart:
ssthresh: slow start threshold
cwnd: congestion window
8.6 Connection Management
As for connection management, that is the famous three-way handshake and four-wave teardown mechanism, which has been covered in great detail above 👆, so I won't repeat it here;
9. What is the UDP header format?
The UDP header format is not as complex as TCP's, because UDP transmission is unreliable and needs far fewer mechanisms;
Compared to TCP, UDP's header is simple: just 8 bytes, made up of four 16-bit fields;
- (1) Source port number and destination port number: each UDP datagram carries the source and destination port numbers, identifying where the data came from and which port it should be delivered to. A program on the receiving host obtains UDP datagrams by listening on the port number;
- (2) Length: the length of the whole UDP datagram, header plus data;
- (3) Checksum: used to check the UDP datagram for transmission errors; it is filled in by the sender and verified by the receiver;
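As a sketch, the 8-byte header can be packed and parsed with Python's `struct` module, assuming the standard RFC 768 layout (four 16-bit fields in network byte order; the example port numbers and payload are arbitrary):

```python
import struct

def parse_udp_header(segment):
    """Parse the fixed 8-byte UDP header: source port, destination
    port, length, checksum -- four 16-bit big-endian fields."""
    src, dst, length, checksum = struct.unpack("!HHHH", segment[:8])
    return {"src_port": src, "dst_port": dst,
            "length": length, "checksum": checksum}

# Build a fake datagram: 8-byte header + 4-byte payload, so length = 12.
datagram = struct.pack("!HHHH", 53, 33333, 12, 0) + b"data"
print(parse_udp_header(datagram))
```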
10. What are the characteristics of UDP connections?
10.1. Connectionless
There is no need to establish a connection before sending data, and no need to release one afterwards, so transmission starts quickly.
10.2. Best-effort delivery
UDP does its best to deliver the data but does not guarantee reliable delivery, and it does not need to maintain any connection state;
10.3. Packet Oriented
The sender's UDP simply adds a header to whatever message the application hands down and passes it to the IP layer.
Application messages are neither split nor merged: whatever the application sends as one message is delivered as one UDP datagram, and the receiver reads it as one datagram.
10.4. No congestion control
Even if the network is congested, UDP does not apply congestion control. This suits real-time applications that do not need to slow down for congestion;
For example, live broadcasts, games and similar software can tolerate packet loss, but demand low-latency, real-time transmission;
10.5. High flexibility
Support one to one, one to many, many to one, many to many communication;
10.6. Small header overhead
The header is only 8 bytes, compared with at least 20 bytes for a TCP header.
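The connectionless and packet-oriented characteristics above can be seen in a few lines of socket code (a minimal localhost sketch; the function name and message are my own): no handshake happens before `sendto`, and the datagram arrives as one whole message.

```python
import socket

def udp_roundtrip(message):
    # Receiver: bind to an OS-chosen port on localhost; note there is
    # no listen()/accept() step as there would be for TCP.
    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server.bind(("127.0.0.1", 0))
    server.settimeout(2)
    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # Sender: no connection establishment -- just send one datagram.
        client.sendto(message, server.getsockname())
        data, _addr = server.recvfrom(4096)  # one whole datagram
        return data
    finally:
        client.close()
        server.close()

print(udp_roundtrip(b"hello"))  # b'hello'
```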
11. What are the scenarios in which UDP connections are used?
(1) Access to web pages or apps
Web pages and mobile apps are traditionally accessed over HTTP, which runs on TCP, and establishing a TCP connection requires multiple round trips. In an unstable network environment, connection establishment takes a long time, and since the device is mobile, TCP may repeatedly disconnect and reconnect, which is also time-consuming.
QUIC (full name: Quick UDP Internet Connections) is a communication protocol developed by Google on top of UDP. It aims to reduce the latency of connection establishment and data transfer, and to estimate bandwidth in both directions to avoid network congestion.
(2) Streaming media protocol
This mainly refers to video transmission protocols for live streaming, such as the QUIC protocol mentioned above;
(3) Real-time games
For games, real-time responsiveness is critical, and low-latency transmission is exactly UDP's strength;
(4) Internet of Things
Maintaining TCP connections is too costly for Internet of Things devices, while UDP suits the real-time, lightweight needs of the IoT. For example, the CoAP protocol is based on UDP.
Reference & Thanks
- What is the Transmission Control Protocol socket?
- TCP flow control and congestion control
- QUIC: fast UDP network connections
- A simple understanding of TCP
- TCP's three-way handshake and four-wave teardown
- Why TCP uses a three-way handshake instead of two
- Geek Time – Interesting Talk about Network Protocols – Lecture 10
Other
Android you have to learn about HTTP
About me
Brother Dei, if my post is helpful to you, please give it a like 👍, and follow me on GitHub and my blog.