
What you will learn from this article

  • The OSI 7-layer model and the TCP/IP 4-layer model
  • What is UDP
  • What is TCP
  • The TCP connection procedure
  • Why does TCP need three handshakes instead of two?
  • The TCP disconnection procedure
  • Why does A enter the TIME-WAIT state and wait 2MSL before entering the CLOSED state?
  • ARQ (timeout retransmission): stop-and-wait ARQ and continuous ARQ
  • The sliding window protocol
  • The congestion handling process
  • Congestion control algorithms

1. Seven-layer Protocol (OSI)

1. Application layer

Function: provides communication services directly to applications so that they can talk to applications on other computers. The various user applications, including Web applications, live here.

Protocols: DNS, FTP, HTTP, SMTP, TELNET, IRC, WHOIS

2. Presentation layer

Role: the main role of this layer is to define data formats and handle encryption.

3. Session layer

Function: controls the dialogue between applications. It defines how a session is started, controlled, and ended, including the management of multiple bidirectional messages, so that an application can be notified when only part of a message sequence has completed.

PS: the application, presentation, and session layers often share the same protocols.

4. Transport layer

Function: chooses between error-recovery and non-error-recovery protocols, multiplexes input from different data streams on the same host, and reorders packets. It is the most critical layer, solely responsible for end-to-end data transmission and control: it provides reliable transport services to the three layers above it and hands reliable destination information to the network layer. The unit of data at this layer is the segment.

Protocols: TCP, UDP, etc.

5. Network layer

Functions: mainly responsible for addressing and route selection; the network layer also implements congestion control, internetworking, and so on.

Protocols: IP, IPX, RIP, OSPF, etc.

6. Data link layer

Role: responsible for communication and transmission between nodes over the interconnected physical layer. Functions at this layer include physical addressing, framing, flow control, error detection, and retransmission. The unit of data at this layer is the frame.

Protocols: ARP, RARP, SDLC, HDLC, PPP, STP, frame relay, etc.

7. Physical layer

Function: responsible for transmitting the raw bit stream (0/1 sequences), converting bits to and from physical signals such as voltage levels and light pulses. It defines the mechanical, electrical, functional, and procedural characteristics for activating, maintaining, and closing connections between communication endpoints. This layer provides the physical medium over which upper-layer protocols transfer data. The unit of data at this layer is the bit.

Typical specifications: EIA/TIA RS-232, EIA/TIA RS-449, V.35, RJ-45, FDDI token ring, etc.

2. The Four-Layer Model (TCP/IP)

How the protocols divide across the layers

OSI:

  • Physical layer: EIA/TIA-232, EIA/TIA-499, V.35, V.24, RJ45, Ethernet, 802.3, 802.5, FDDI, NRZI, NRZ, B8ZS
  • Data link layer: Frame Relay, HDLC, PPP, IEEE 802.3/802.2, FDDI, ATM, IEEE 802.5/802.2
  • Network layer: IP, IPX, AppleTalk DDP
  • Transport layer: TCP, UDP, SPX
  • Session layer: RPC, SQL, NFS, NetBIOS names, AppleTalk ASP, DECnet SCP
  • Presentation layer: GIF, JPEG, TIFF, PICT, ASCII, EBCDIC, encryption, MPEG, MIDI, HTML
  • Application layer: FTP, WWW, Telnet, NFS, SMTP, Gateway, SNMP

TCP/IP:

  • Data link layer: ARP, RARP
  • Network layer: IP, ICMP, IGMP
  • Transport layer: TCP, UDP
  • Application layer: Telnet, FTP, SMTP, SNMP

For computers and network devices to communicate, both sides need to follow the same method. How to identify the communication target, how to establish the connection, which side initiates communication, which language to use, and when to disconnect all need to be agreed in advance. Different devices and operating systems must also agree on these rules. We call such rules protocols.

TCP/IP refers to the family of Internet protocols, including TCP, UDP, IP, FTP, HTTP, and SMTP.

The TCP/IP model is the foundation of the Internet. It is the general name of a series of network protocols. These protocols can be divided into four layers: link layer, network layer, transport layer and application layer.

  • Link layer: encapsulates and de-encapsulates IP packets, and sends and receives ARP/RARP packets.
  • Network layer: responsible for routing and forwarding packets to the target network or host.
  • Transport layer: splits and reassembles data and encapsulates it in TCP or UDP format.
  • Application layer: responsible for providing applications to users, such as HTTP, FTP, Telnet, DNS, SMTP, and so on.

In a layered network architecture, communication is always established between the peer layers of the two parties, never across layers. As data passes down through each layer at the sending end, that layer's protocol header is attached (the data link layer also attaches a protocol trailer), so that the corresponding layer at the receiver can identify the protocol being used; this is protocol encapsulation. TCP/IP has two representative transport layer protocols: TCP and UDP.
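The encapsulation described above can be sketched as nested headers around the payload. This is a toy illustration using made-up 4-byte ASCII tags; real headers are structured binary fields, not text.

```python
# Toy sketch of protocol encapsulation: each layer prepends its own
# (made-up, fixed-width) header; the link layer also appends a trailer.

def encapsulate(payload: bytes) -> bytes:
    """Wrap application data with transport, network, and link headers."""
    segment = b"TCP|" + payload          # transport layer adds its header
    packet = b"IP |" + segment           # network layer adds its header
    frame = b"ETH|" + packet + b"|FCS"   # link layer adds header and trailer
    return frame

def decapsulate(frame: bytes) -> bytes:
    """Strip the headers in reverse order at the receiving end."""
    packet = frame[4:-4]                 # remove link header and trailer
    segment = packet[4:]                 # remove network header
    payload = segment[4:]                # remove transport header
    return payload

frame = encapsulate(b"GET / HTTP/1.1")
print(frame)               # headers nested around the payload
print(decapsulate(frame))  # original data recovered
```

Each peer layer only looks at its own header, which is why communication happens layer-to-layer and never across layers.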

3. UDP

UDP (User Datagram Protocol) is a transport layer protocol. Like TCP, it handles packets, but UDP is connectionless: it does not group or reassemble data, does not reorder it, and does not guarantee that data arrives safely.

1. Connectionless

UDP is a connectionless protocol. Before sending data, it does not need to establish a connection with a three-way handshake the way TCP does: if you want to send data, you simply send it.

  • When sending data, the application layer hands the data to UDP at the transport layer, which simply prepends a UDP header to it.
  • When receiving data, the network layer hands the packet up to the transport layer; UDP simply strips the UDP header and passes the payload to the application layer.
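Both directions can be seen with Python's standard socket module; a minimal loopback sketch, with no handshake of any kind before the first datagram:

```python
# Minimal connectionless UDP exchange on loopback using the stdlib.
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))          # let the OS pick a free port
receiver.settimeout(5)
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", addr)            # no connect() needed: just send

data, peer = receiver.recvfrom(1024)     # one datagram, boundary preserved
print(data)                              # b'hello'

sender.close()
receiver.close()
```

Note that `sendto` succeeds even if nobody is listening; that is the "send it and forget it" behavior described above.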

2. Unreliability

Because UDP is connectionless, there is no need to establish a connection and data can be sent at any time; as a result, UDP is not reliable. Data is sent as-is with no backup kept, and the sender does not care whether the data is lost or whether the receiver receives it correctly. Network conditions fluctuate, but UDP has no congestion control: once data starts flowing, it flows at a constant rate. The rate is not adjusted even under bad network conditions, which can cause packet loss when the network is congested.

3. Efficiency

UDP sends data at a constant rate, which is unreliable under poor network conditions. But this very unreliability makes UDP much less complex than TCP: it does not need to guarantee accurate delivery, does not need to retransmit data, and does not need to handle data loss at all.

The UDP header contains:

  • Two 16-bit port numbers: the source port and the destination port
  • The length of the entire datagram
  • A checksum (in IPv4 covering both the header and the data), used to detect errors in the header and payload

The UDP header overhead is therefore small: only 8 bytes, compared with at least 20 bytes for TCP, which makes UDP very efficient for data transmission.
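The 8-byte layout can be built and parsed with the standard struct module. The field order below follows the UDP header (source port, destination port, length, checksum, each 16 bits, big-endian); the port and payload values are illustrative:

```python
# Build and parse the fixed 8-byte UDP header.
import struct

def build_udp_header(src_port, dst_port, payload_len, checksum=0):
    # The length field counts the 8-byte header plus the payload.
    return struct.pack("!HHHH", src_port, dst_port, 8 + payload_len, checksum)

def parse_udp_header(header):
    src, dst, length, checksum = struct.unpack("!HHHH", header[:8])
    return {"src": src, "dst": dst, "length": length, "checksum": checksum}

hdr = build_udp_header(5000, 53, 12)
print(len(hdr))               # 8 -- the whole UDP header
print(parse_udp_header(hdr))  # {'src': 5000, 'dst': 53, 'length': 20, 'checksum': 0}
```

Four fixed fields and nothing else: no sequence numbers, no acknowledgements, no window, which is where the efficiency comes from.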

4. Multiple transmission modes

UDP supports one-to-one, one-to-many, many-to-many, and many-to-one transmission. In other words, UDP provides unicast, multicast, and broadcast transmission.

5. Packet-oriented

During transmission, UDP simply prepends a UDP header to the application-layer data and hands it to the IP layer. It neither splits nor merges the data, so the boundary of each message is preserved. The application therefore needs to choose an appropriate packet size.

Usage scenarios: live streaming, games, and so on are likely to use UDP. With TCP, the user experience could suffer: TCP guarantees the accuracy and correctness of the data, so if the network fluctuates and data transfers slowly, much may have happened by the time the data arrives; the user's picture would no longer be current and would not meet the user's needs.

4. TCP

When computers communicate, they need to ensure that data is reliable, safe, and correct. For example, when you check mail or visit a web page, you want the page or message you see to be complete, with nothing missing. This is where TCP is used.

TCP (Transmission Control Protocol) is a connection-oriented, reliable, byte-stream-based transport layer protocol.

1. Connection-oriented

Before sending data, the two parties need to establish a connection.

2. Only unicast transmission is supported

Each TCP connection has exactly two endpoints and can only transmit point-to-point; multicast and broadcast are not supported.

3. Byte-oriented stream

Unlike UDP, which transmits one packet at a time, TCP transmits data as a byte stream and does not preserve packet boundaries.

4. Reliability

To ensure reliable transmission, TCP assigns a sequence number to each packet, which also guarantees that packets delivered to the receiver are in order. The receiver sends back an acknowledgement (ACK) for received packets, and if the sender does not receive an acknowledgement within the estimated round-trip time (RTT), it retransmits the packet.

5. Provides congestion control

When the network is congested, TCP reduces the rate and number of packets sent.

6. Full duplex communication

TCP allows the communicating applications to send data at any time, because TCP keeps a buffer at each end to hold data temporarily. TCP may send a segment immediately, or buffer data and wait in order to send more at once (the maximum segment size depends on the MSS).
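A minimal loopback sketch of a connection-oriented exchange, using Python's standard socket module; the three-way handshake described in the next section happens inside connect()/accept(), before any application data moves:

```python
# Minimal connection-oriented TCP echo on loopback using the stdlib.
import socket
import threading

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # let the OS pick a free port
server.listen(1)
addr = server.getsockname()

def echo_once():
    conn, _ = server.accept()          # completes the handshake server-side
    conn.sendall(conn.recv(1024))      # echo the byte stream back
    conn.close()

t = threading.Thread(target=echo_once)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(addr)                   # SYN, SYN+ACK, ACK happen here
client.sendall(b"ping")
reply = client.recv(1024)
print(reply)                           # b'ping'

client.close()
t.join()
server.close()
```

Both sockets can send and receive on the same connection, which is the full-duplex behavior described above.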

5. TCP Connection (Three-Way Handshake)

Initially, both the client and the server are in the CLOSED state. Before communication starts, each side creates a TCB (transmission control block). The client then prepares to send, and the server enters the LISTEN state.

1. First handshake

The client sends a connection request segment containing its own initial sequence number. After the request is sent, the client enters the SYN-SENT state.

2. Second handshake

The server receives the connection request and, if it agrees to the connection, replies with a segment that also contains its own initial sequence number. After this segment is sent, the server enters the SYN-RECEIVED state.

3. Third handshake

The client receives the server's reply segment and sends an acknowledgement segment back to the server. The client enters the ESTABLISHED state after sending it, and the server enters the ESTABLISHED state after receiving it. The connection between client and server is now established.

PS: why three handshakes rather than two? To prevent a stale connection request segment from reaching the server and causing an error. Imagine this scenario: the client sends connection request A, which is delayed by the network and times out, so TCP's timeout retransmission mechanism sends connection request B. Request B reaches the server; the server responds, the connection is established, data is exchanged, and the connection is released. Now suppose request A finally arrives at the server after both ends have closed. The server would believe the client wants to establish a new TCP connection, respond to the request, and enter the ESTABLISHED state, while the client is actually in the CLOSED state. The server would then wait forever, wasting resources.

PS: if the reply to a SYN is lost, TCP retransmits the SYN packet, trying up to five times. SYN flood attacks can occur during connection establishment; to mitigate them, you can lower the retry count or simply reject requests that cannot be processed.

PS: what is the ISN, and why does it exist? The ISN (initial sequence number) is the starting point for the sender's byte numbering, and it lets the receiver build a valid receive window. It is dynamically randomized. Randomizing it improves security: a third party cannot guess it and reset the connection with a forged RST packet, because for an RST to succeed its sequence number must fall inside the other side's valid window. With a dynamically random ISN that probability is tiny, whereas an ISN of 0 would be much easier to guess. Randomizing the ISN also keeps byte sequence numbers from repeating across TCP sessions, so that if a TCP five-tuple collides, data from one session is not mistaken for another.

PS: can the first handshake carry data? Why or why not? The first handshake cannot carry data, because the handshake is not yet complete. If the server had to cache such data and deliver it after the handshake succeeded, SYN flood attacks would become more damaging: an attacker could forge thousands of handshake packets each carrying 1 KB of data, and the receiver would have to allocate large buffers to store it all, exhausting memory.

PS: can the third handshake carry data? Yes; this can be done with TCP Fast Open (TFO). The client and the server store the same cookie, and the cookie is sent with the next handshake to save an RTT.

6. TCP Disconnection (Four-Way Handshake)

1. First handshake

When the client considers that data transmission is complete, it sends a connection release request to the server and enters the FIN_WAIT_1 state.

2. Second handshake

Upon receiving the client's release request, the server tells its application layer to release the TCP connection, sends an ACK to the client, and enters the CLOSE_WAIT state. At this point the connection in the client-to-server direction is released: the server will no longer receive data from the client. But TCP is full-duplex, so the server can still send data to the client.

3. Third handshake

If the server still has data to send at this point, it continues sending. Once all data has been sent, the server sends a connection release request to the client and enters the LAST_ACK state.

4. Fourth handshake

The client receives the server's release request and sends an acknowledgement back to the server. The client then enters the TIME_WAIT state, which lasts for 2MSL. If the server does not resend its request within this period, the client enters the CLOSED state. When the server receives the acknowledgement, it enters the CLOSED state as well.

PS: why does A enter the TIME-WAIT state and wait 2MSL before entering CLOSED? To ensure that B receives A's acknowledgement. If A entered CLOSED immediately after sending the acknowledgement, and that acknowledgement were lost due to network problems, B could never close normally. In principle, once all four segments have been sent, A could enter the closed state directly; but the network is unreliable, and the final ACK may be lost. TIME_WAIT exists so that the ACK can be retransmitted: if the server does not receive the ACK, it keeps resending the FIN, and if the client receives no FIN from the server within 2MSL, it enters the CLOSED state. The client runs a timer that starts when it sends the ACK and enters TIME_WAIT, and restarts whenever a FIN arrives within 2MSL; MSL is the maximum lifetime of a segment in the network.
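One practical consequence of TIME_WAIT: a server restarted right after closing may fail to bind its old port while the previous connection lingers for 2MSL. The standard SO_REUSEADDR socket option relaxes this; a common server-setup sketch:

```python
# Allow rebinding a listening port that may still be in TIME_WAIT.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)  # set before bind()
s.bind(("127.0.0.1", 0))
s.listen(1)
reuse = s.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR)
print(reuse != 0)   # True -- the option is enabled
s.close()
```

The option must be set before bind(); it does not shorten TIME_WAIT itself, it only lets a new socket bind despite it.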

PS: why does connecting take three handshakes but closing take four? When the server receives a SYN connection request from the client, it can reply with a single SYN+ACK segment, where the ACK acknowledges and the SYN synchronizes. But when the server receives a FIN from the client, the connection may not be ready to close immediately, so the server can only reply with an ACK telling the client that the FIN was received. The server sends its own FIN only after all its remaining data has been sent. FIN and ACK therefore cannot be combined, and four steps are required.

PS: what if the connection is established but the client suddenly fails? TCP has a keep-alive mechanism. Clearly, if the client fails, the server cannot wait forever; that would waste resources. The server resets a keep-alive timer each time it receives data from the client; the timeout is usually set to 2 hours. If nothing arrives from the client within 2 hours, the server starts sending probe segments, one every 75 seconds. If 10 consecutive probes get no response, the server concludes that the client has failed and closes the connection.
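The keep-alive behavior described above is switched on per socket with the standard SO_KEEPALIVE option; the idle time, probe interval, and probe count come from OS settings (on Linux, the tcp_keepalive_time, tcp_keepalive_intvl, and tcp_keepalive_probes sysctls):

```python
# Enable TCP keep-alive probing on a socket.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)   # turn probing on
enabled = s.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE)
print(enabled != 0)   # True -- this connection will be probed when idle
s.close()
```

Without this option, a dead peer on an idle connection is never detected; with it, the OS performs the probe-and-give-up cycle described above.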

7. ARQ Protocol (Timeout Retransmission Mechanism)

The ARQ protocol is the timeout retransmission mechanism: one of the error-correction protocols used at the transport layer and data link layer of the OSI model. It uses acknowledgements and timeouts to ensure that data arrives correctly, building reliable transmission on top of an unreliable service. ARQ comes in two forms, stop-and-wait ARQ and continuous ARQ, and includes mechanisms for error detection, positive acknowledgement, timeout retransmission, and negative acknowledgement with retransmission.

1. Stop-and-wait ARQ

TCP is a full-duplex protocol: once a connection is established, both parties are sender and receiver at once. Consider the simple case where one side only sends and the other only receives, and assume the sender is A and the receiver is B.

1.1 No errors

A sends packet M1 and then stops and waits until B confirms it. B receives M1 and sends an acknowledgement to A. Upon receiving the acknowledgement, A sends the next packet, M2.

1.2 Error situation

Suppose an error occurs during transmission, or B detects an error when receiving M1 (A keeps a copy of each packet it sends and discards the copy only after receiving an acknowledgement). B simply discards the corrupted M1 and does nothing else. In this error case, once A has waited longer than a set period without receiving an acknowledgement, it assumes the packet it just sent was lost or corrupted and resends it; this is timeout retransmission.

Timeout retransmission means that after sending a packet, A sets a timeout timer; if the timer expires before an acknowledgement arrives, the packet is sent again. If the acknowledgement arrives in time, the packet is not resent. In general, the timeout should be somewhat longer than one RTT.

1.3 ACK Delays and Loss

Suppose A sends packet M1 to B, and B sends an acknowledgement upon receiving it, but the acknowledgement is lost due to network problems. A then receives no acknowledgement within the timeout period and retransmits M1. B then does two things: (1) it discards the duplicate M1 without delivering it to the upper layer, since it has already received M1; (2) it sends the acknowledgement to A again.

The stop-and-wait ARQ protocol has the advantage of simplicity, but also a serious drawback: channel utilization is too low. Channel utilization U = Td / (Td + RTT + Ta), where Td is the transmission delay of the data, RTT is the round-trip time, and Ta is the transmission delay of the acknowledgement.
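Plugging illustrative (made-up) timings into the formula shows how low the utilization gets:

```python
# Stop-and-wait channel utilization: U = Td / (Td + RTT + Ta).
def utilization(td, rtt, ta):
    return td / (td + rtt + ta)

# e.g. 1 ms to send a frame, 20 ms round trip, 1 ms to send the ACK
u = utilization(1.0, 20.0, 1.0)
print(round(u, 3))   # 0.045 -- the channel sits idle over 95% of the time
```

The longer the RTT relative to Td, the worse it gets, which is exactly the problem continuous ARQ addresses next.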

2. Continuous ARQ

Because stop-and-wait ARQ uses the channel so poorly, the continuous ARQ protocol was introduced as an improvement: the sender transmits a whole series of packets and then waits for their acknowledgements.

Continuous ARQ specifies that the sender slides its send window forward one packet for each acknowledgement received. When the sender receives an acknowledgement of the first packet, it moves the send window forward one packet; if the first five packets have already been sent, the sixth packet, now inside the window, can be sent.

2.1 Cumulative Acknowledgement

The receiver generally uses cumulative acknowledgement: it does not have to acknowledge every packet it receives. Instead, after several packets arrive, it sends a single acknowledgement for the last packet that arrived in order. That acknowledgement means every packet up to and including it has been correctly received.

The advantage of cumulative acknowledgement is that it is easy to implement, and even a lost acknowledgement need not trigger retransmission. The disadvantage is that it cannot tell the sender exactly which packets the receiver has correctly received. For example, if the sender transmits five packets and the third one is lost, the receiver can only acknowledge the first two; since the fate of the last three is unknown, all three must be retransmitted. This mechanism is called Go-Back-N: go back and retransmit the N packets that were already sent.
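The Go-Back-N behavior can be sketched as a toy simulation; the window size and loss pattern here are made-up illustrations, and a retransmitted packet is assumed to succeed:

```python
# Toy Go-Back-N sender: a cumulative ACK slides the window; any loss
# forces retransmission of everything from the lost packet onward.
def go_back_n(num_packets, window, drops):
    base, log = 0, []
    while base < num_packets:
        # send everything currently allowed by the window
        batch = list(range(base, min(base + window, num_packets)))
        log.append(("send", batch))
        # the first dropped packet in the batch blocks the cumulative ACK
        lost = [p for p in batch if p in drops]
        if lost:
            acked_up_to = lost[0]        # receiver ACKs only in-order data
            drops.discard(lost[0])       # assume the retransmission succeeds
        else:
            acked_up_to = batch[-1] + 1
        log.append(("ack", acked_up_to))
        base = acked_up_to               # slide the window forward
    return log

for event in go_back_n(6, 3, drops={2}):
    print(event)
```

With packet 2 dropped, packets 3 and 4 are resent along with it even though they may have arrived, which is exactly the inefficiency the text describes.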

8. Sliding Window Protocol

The sliding window protocol maintains a window at each end of the connection: a send window at the sender and a receive window at the receiver, and these windows slide forward over time. It allows the sender to transmit multiple packets without waiting for acknowledgements. TCP's sliding window is measured in bytes.

There are four categories of data relative to the send window:

  • Data sent and acknowledged (outside the send window and send buffer)
  • Data sent but not yet acknowledged (within the send window)
  • Data allowed to be sent but not yet sent (within the send window)
  • Data in the send buffer not yet allowed to be sent (outside the send window)

There are likewise four categories relative to the receive window:

  • Data received, acknowledged, and delivered to the host (outside the receive window and receive buffer)
  • Data received out of order (within the receive window)
  • Data allowed to be received (within the receive window)
  • Data not allowed to be received (outside the receive window)

The sliding window is an important concept that lets TCP implement flow control: the receiver tells the sender in each segment how much data it can accept, ensuring the receiver can keep up and preventing the sender from continuing to transmit once the receiver's buffer is full.

9. Congestion Control

Network congestion is the phenomenon in which too many packets arrive at some part of the network for it to handle, degrading the performance of that part or of the whole network; in severe cases network communication can stop entirely, i.e., deadlock. Congestion control is the mechanism for dealing with network congestion.

Congestion control is an algorithm that adjusts the amount of data a TCP connection sends at once, increasing or decreasing it to approximate the current capacity of the network. If the amount sent at once were fixed at 1, the protocol would degrade into stop-and-wait. The amount is measured in bytes, but if we assume every TCP segment is a maximum-size segment (MSS), we can also count in packets; so when we say the send amount "increases by 1", we mean it increases by the byte size of one maximum segment.

1. Congestion control algorithm

Congestion control assumes that packet loss is caused by a busy network. There are three congestion control actions, corresponding to the three conditions the source host can observe:

  1. A new acknowledgement is received. This indicates that the current send amount is below the network's carrying capacity, so it can be increased. If the current send amount is below the slow start threshold (ssthresh), it is doubled (multiplied by 2), i.e., exponential growth; otherwise it is increased by 1, i.e., linear growth.

  2. Three acknowledgements for the same packet, i.e., three duplicate ACKs, are received. The network is somewhat busy. The send amount is halved, the slow start threshold (ssthresh) is set to this halved value, and the linear growth stage begins.

  3. The acknowledgement for a packet fails to arrive in time, i.e., a timeout occurs. The network is busier than in the previous case. The slow start threshold is set to half the send amount, the send amount is set to 1, and the slow start (exponential growth) stage begins.
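The three reactions above can be sketched as a per-event update rule; units are segments, and the event sequence is a made-up illustration:

```python
# Toy congestion-window update: exponential growth below ssthresh,
# linear growth above it, halving on three duplicate ACKs, reset on timeout.
def next_cwnd(cwnd, ssthresh, event):
    if event == "ack":                   # a new acknowledgement arrived
        return (cwnd * 2, ssthresh) if cwnd < ssthresh else (cwnd + 1, ssthresh)
    if event == "3dup":                  # three duplicate ACKs: mildly busy
        return (cwnd // 2, cwnd // 2)    # halve, enter linear growth
    if event == "timeout":               # worst case: restart slow start
        return (1, cwnd // 2)

cwnd, ssthresh = 1, 16
for ev in ["ack", "ack", "ack", "ack", "ack", "3dup", "ack", "timeout", "ack"]:
    cwnd, ssthresh = next_cwnd(cwnd, ssthresh, ev)
    print(ev, cwnd, ssthresh)
```

Running it shows the window climbing 1, 2, 4, 8, 16, then 17 (linear), halving to 8 on duplicate ACKs, and collapsing to 1 on timeout before growing again.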

2. Four stages of congestion control

2.1 Slow Start

Slow start is a congestion control mechanism used by the Transmission Control Protocol (TCP), also called exponential growth. The slow start algorithm rests on the observation that the rate at which new packets enter the network should match the rate at which acknowledgements return from the other end.

Slow start adds another window to the sender's TCP: the congestion window, denoted cwnd. The congestion window is flow control imposed by the sender, while the advertised window is flow control imposed by the receiver. The algorithm works as follows:

  1. When a TCP connection is established with a host on another network, the congestion window is initialized to one segment (i.e., the segment size advertised by the other end).

  2. Each time an ACK is received, the congestion window increases by one segment size (cwnd is maintained in bytes, but slow start increases it in units of the segment size). Per round trip this doubles the window, producing exponential growth.

  3. The sender uses the minimum of the congestion window and the advertised window as its sending limit.

2.2 Congestion Avoidance

The slow start algorithm is a way of initiating data flow on a connection, but its exponential growth quickly congests the network: at some point the capacity of the path is reached and intermediate routers start dropping packets. The congestion avoidance algorithm is a way of dealing with lost packets. There are two indications of packet loss: a timeout, where the source host's timeout timer expires without an ACK arriving for a packet; and duplicate acknowledgements, where three or more consecutive duplicate ACKs for the same packet arrive before the timer expires.

If a timeout occurs, or three duplicate acknowledgements are received, a packet has been lost: the network is congested, and congestion control must be performed. The algorithm works as follows:

  1. Set the slow start threshold (ssthresh) to half of the current window (the minimum of cwnd and the advertised window, but at least 2 segments).

  2. If the congestion was signalled by a timeout, set the congestion window (cwnd) to 1 and enter slow start. If it was signalled by duplicate acknowledgements, start fast retransmit and fast recovery.

  3. In the slow start phase the congestion window grows exponentially; once it exceeds the slow start threshold (ssthresh), the congestion avoidance algorithm takes over. Under congestion avoidance the window no longer doubles but grows linearly.

Congestion avoidance and slow start are two independent algorithms with different purposes. But when congestion occurs we want to reduce the rate at which packets enter the network, and slow start serves that role, so in practice the two are implemented together. The 1990 TCP Reno version added the fast retransmit and fast recovery algorithms so that mild congestion would not trigger slow start's drastic reduction of the send window.

2.3 Fast Retransmission

When the destination host receives an out-of-order segment, it immediately generates a duplicate ACK, which should not be delayed. The purpose of this ACK is to let the source host know that an out-of-order segment arrived and to tell it which sequence number is expected next.

Because the sender cannot know whether a duplicate ACK was caused by a lost segment or by mere reordering, it waits for a few duplicate ACKs to arrive. If segments were only reordered, one or two duplicate ACKs will typically appear before the reordered segment is processed and a new ACK is generated. But if three or more duplicate ACKs arrive in a row, a segment has very likely been lost, and fast retransmit proceeds as follows:

1. Set the slow start threshold (ssthresh) to half of the current congestion window (cwnd), and set the congestion window to ssthresh plus three segment sizes. Retransmit the lost segment without waiting for the timeout timer to expire.

2. Each time another duplicate ACK is received, increase cwnd by one segment size and send a packet if the new cwnd allows. Each additional duplicate ACK means one more packet has left the network and reached the destination host, so the network can accommodate one more packet.

2.4 Fast Recovery

After the lost segment has been retransmitted by fast retransmit and received by the destination host, the destination no longer sends duplicate ACKs; instead it sends an ACK acknowledging new data. Arriving within one round-trip time of the retransmission, this ACK acknowledges the retransmitted segment and all the intermediate segments sent between the lost packet and the first duplicate ACK. To return quickly to a higher transmission rate, the connection now enters the fast recovery stage. The algorithm works as follows:

  1. When the ACK for new data arrives, set the congestion window (cwnd) to the slow start threshold (ssthresh) (the value set in step 1 of fast retransmit) and enter the congestion avoidance process.
