Completed: Protocol model (1-4), application layer (5-35), transport layer (36-65). To be completed: Network layer (66-79), data link layer

Computer network model

1. What are the five-layer Internet protocol stack and the seven-layer OSI (Open Systems Interconnection) reference model?

Five layers: application layer, transport layer, network layer, data link layer, and physical layer. Seven layers: application layer, presentation layer, session layer, transport layer, network layer, data link layer, and physical layer.

2. Why are layered architectures considered? What are the advantages and disadvantages?

Advantages of layering

Protocol layering has conceptual and structural advantages. Layering provides a structured way to discuss system components, and modularity makes it easier to update system components.

Disadvantages of layering

A potential disadvantage of layering is that one layer may duplicate functionality of a lower layer. For example, many protocol stacks provide error recovery both on a per-link basis and on an end-to-end basis. A second potential disadvantage is that functionality at one layer may need information (such as a timestamp value) that is present only in another layer, which violates the goal of separation of layers.

3. What are the functions of each layer? What is the data unit transferred at each layer?

  1. Application layer (data): determines the nature of communication between processes at the two communicating endpoints, meets user needs, and provides network applications to users

  2. Presentation layer (data): addresses the syntactic representation of information, such as encryption and decryption

  3. Session layer (data): provides mechanisms for establishing and maintaining communication between applications, including access authentication and session management, e.g. server authentication of user logins

  4. Transport layer (segment): provides logical communication between application processes, realizing data communication between user processes on different hosts; reliable and unreliable transmission, transport-layer error detection, flow control, etc.

  5. Network layer (packet): provides logical communication between hosts; logical addressing (IP), route selection, and data delivery from source to destination

  6. Data link layer (frame): encapsulates upper-layer data into frames, accesses the medium using MAC addresses, and detects (and may correct) errors

  7. Physical layer (bit stream): transmission of bit streams between devices; physical interfaces, electrical characteristics, etc.

4. Which data exchange devices are used in each layer?

  1. Gateway: application layer and transport layer (a gateway interconnects networks at the transport layer and above; it is the most complex interconnection device and is used only to interconnect two networks with different higher-layer protocols. Its structure is similar to a router's, differing in the interconnection layer. Gateways can be used for both WAN and LAN interconnection.)

  2. Router: Network layer (routing, storage and forwarding)

  3. Switch: data link layer and network layer (reads the MAC address information in frames, forwards based on the MAC address, and records these MAC addresses and their corresponding ports in an internal address table)

  4. Bridge: data link layer (connects two LANs and forwards frames by MAC address)

  5. Hub: physical layer (a pure hardware device used to connect network terminals such as computers)

  6. Repeater: physical layer (regeneration and retiming of network signals at the bit level, allowing them to travel longer distances over the network)

The application layer

5. What are the dominant application architectures? What are their strengths and weaknesses?

C/S mode (client/server), P2P mode (peer-to-peer), and hybrid mode. C/S: the server is always on with a fixed, known address, which makes it easy to manage, but the server can become a bottleneck and a single point of failure. P2P: peers communicate directly with one another, so capacity scales with the number of peers, but the architecture is harder to manage and secure.

6. How do processes on the same host and different hosts communicate with each other?

Interprocess communication on the same host is determined by the operating system. Interprocess communication between different hosts is accomplished through message/packet exchange.

7. What is a socket? What is a socket used for?

A socket is an API: the interface between an application and the network. A process is analogous to a house, and its socket to the house's door. When a process wants to send a message to a process on another host, it pushes the message out of its door.

8. How are processes addressed?

Host address (IP address) + an identifier (port number) for the specified receiving process on the destination host.
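The IP + port addressing scheme can be sketched with Python's standard `socket` API. This is a toy demo, not a real service: it starts a throwaway echo server on the loopback address (port 0 asks the OS for any free port), then a client connects to that (IP, port) pair, which together identify the receiving process.

```python
import socket
import threading

def run_echo_demo(message: bytes) -> bytes:
    """Start a throwaway echo server on localhost, send one message, return the echo."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
    server.listen(1)
    host, port = server.getsockname()  # (IP address, port) identifies the receiving process

    def serve():
        conn, _ = server.accept()
        conn.sendall(conn.recv(1024))  # echo back whatever arrives
        conn.close()

    t = threading.Thread(target=serve)
    t.start()

    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect((host, port))       # address = host IP + port number
    client.sendall(message)
    reply = client.recv(1024)
    client.close()
    t.join()
    server.close()
    return reply

print(run_echo_demo(b"hello"))  # → b'hello'
```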

9. What do application layer protocols define?

1. The types of messages exchanged: request messages and response messages. 2. The syntax of each message type and a description of each field. 3. The semantics of the fields. 4. Rules determining when and how a process sends messages and responds to them.

10. What is the application layer protocol for web applications? What is stateless protocol?

HTTP: HyperText Transfer Protocol. A stateless protocol is one in which the server sends the requested file to the client without storing any state information about the client. HTTP is a stateless protocol; web applications use cookie technology as a supplement to make up for the shortcomings of HTTP's statelessness.

11. What is a URL? What is its format?

URL: Uniform Resource Locator, used to address objects in a Web page. A Web page consists of multiple objects: an HTML base file plus referenced objects, and the referenced objects are identified by URL addresses.

URL format: protocol://hostname/pathname (e.g., http://www.baidu.com/documents)
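The format above maps directly onto Python's standard `urllib.parse` splitter; a minimal sketch (the path here is an illustrative placeholder):

```python
from urllib.parse import urlsplit

# Split a URL into the protocol / hostname / pathname parts described above.
parts = urlsplit("http://www.baidu.com/documents/index.html")
print(parts.scheme)   # http
print(parts.netloc)   # www.baidu.com
print(parts.path)     # /documents/index.html
```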

12. What are persistent and non-persistent connections? What are the conceptual premises? What are the pros and cons? Which connection mode does HTTP use?

Consider whether each request/response is sent over a separate TCP connection, or whether all requests/responses are sent over the same TCP connection.

The former is a non-persistent connection (serial or parallel); the latter is a persistent connection (with or without pipelining).

Premise: client/server mode over a connection-oriented transport (TCP).

HTTP can use both persistent and non-persistent connections. Persistent connections with pipelining are the default.

Pipelined persistent connection: an HTTP client sends a request as soon as it encounters an object reference, rather than waiting for the response to the previous request.

13. HTTP message types and their formats

HTTP request message format: the request line's URL contains only the path name; the host name goes in the Host header line. HTTP response message format: status line, header lines, entity body.

14. What are the common methods of HTTP request packets? What are their functions?

HTTP/1.1: GET, POST, HEAD, PUT, DELETE

GET: requests an object. POST: uploads a form, generally placed in the entity body. HEAD: sends a request but does not return the requested object; often used for debugging and tracing. PUT: allows users to upload objects to the server. DELETE: allows users to delete objects from the server.

15. What is the difference between a GET request and a POST request?

1. At the HTTP message level: GET puts the request information in the URL, separated from the path by a question mark and formatted as key-value pairs. The URL itself has no length limit, but browsers impose one. POST puts the request information in the message body, so it is not constrained by URL length. 2. At the database level: GET is idempotent (performing the operation once or several times leaves the database in the same state) and safe (it does not change database data; a GET performs a query), whereas POST is neither idempotent nor safe (a POST submits data that changes the database, and the result of each POST request may differ, since each request may add a new resource). 3. GET requests can be cached (saved in the browser history) and stored (a GET URL can be saved as a browser bookmark), whereas POST requests cannot.
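The first difference can be made concrete with the standard library's `urllib.parse`: the same key-value pairs ride in the URL for GET but in the entity body for POST (the host and path below are illustrative placeholders):

```python
from urllib.parse import urlencode, parse_qs

params = {"user": "alice", "page": "2"}

# GET: parameters travel in the URL after '?', as key=value pairs
get_url = "http://example.com/search?" + urlencode(params)

# POST: the same pairs go in the entity body; the URL stays clean
post_body = urlencode(params).encode()

print(get_url)                        # http://example.com/search?user=alice&page=2
print(parse_qs(post_body.decode()))   # {'user': ['alice'], 'page': ['2']}
```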

16. What are the common status codes and related phrases in HTTP response packets? What do they mean?

The HTTP status code

1xx: informational, the request has been received and processing continues. 2xx: success, the request was successfully received, understood, and accepted. 3xx: redirection, further action must be taken to complete the request. 4xx: client error, the request has a syntax error or cannot be fulfilled. 5xx: server error, the server failed to fulfill a valid request.

HTTP related phrases

301 Moved Permanently: the requested object has been permanently moved; the new URL is given in the Location header line of the response. 400 Bad Request: generic error code. 401 Unauthorized: the request is unauthorized. 403 Forbidden: the server received the request but refuses to serve it. 404 Not Found: the requested document is not on the server. 500 Internal Server Error: an unexpected error occurred on the server. 503 Service Unavailable: the server cannot currently handle the client's request. 505 HTTP Version Not Supported: the server does not support the HTTP version used in the request.
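Python's standard `http` module carries these code/phrase pairs, which makes for a quick cross-check of the list above:

```python
from http import HTTPStatus

# Look up the standard reason phrase for a few of the codes listed above.
for code in (301, 404, 500, 505):
    status = HTTPStatus(code)
    print(code, status.phrase)
# 301 Moved Permanently
# 404 Not Found
# 500 Internal Server Error
# 505 HTTP Version Not Supported
```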

17. Briefly describe the process of HTTP request response.

1. The client connects to the Web server: an HTTP client (usually a browser) opens a TCP connection to the server's HTTP port, 80 by default. 2. The client sends an HTTP request to the Web server through the TCP socket. 3. The server receives the request, parses it to locate the requested resource, writes a copy of the resource to the TCP socket, and thereby returns an HTTP response for the client to read. 4. The TCP connection is released: if the Connection mode is close, the server actively closes the TCP connection and the client closes its side passively; if it is keep-alive, the connection stays open for a period of time during which further requests can be received. 5. The client browser parses the response: it first parses the status line to check whether the status code indicates success, then parses each response header (which give, among other things, the byte length of the HTML document and its character set), then reads the HTML response data, formats it according to HTML syntax, and renders it in the browser window.
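The request side of steps 1-2 can be sketched by assembling the raw message a browser would write to the TCP socket; the host name and path below are placeholders, and this only builds the bytes rather than sending them:

```python
def build_request(host: str, path: str = "/", keep_alive: bool = True) -> bytes:
    """Assemble the raw HTTP/1.1 request bytes a browser would write to the socket."""
    connection = "keep-alive" if keep_alive else "close"
    lines = [
        f"GET {path} HTTP/1.1",       # request line: method, path, version
        f"Host: {host}",              # the host name goes here, not in the URL
        f"Connection: {connection}",  # persistent vs non-persistent
        "",                           # blank line ends the header section
        "",
    ]
    return "\r\n".join(lines).encode()

print(build_request("www.baidu.com").decode())
```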

18. What is a cookie? What are its four main components?

Cookie content

Cookie technology allows sites to keep track of users. Because HTTP is stateless, without cookies you would have to enter your account and password on every visit to a page that requires login. Cookies are a client-side solution: special information the server sends to the client, which the client stores locally as text. Every subsequent request from the client carries this information. For example, when a user visits a cookie-enabled site and submits an account and password, the server's response to the client carries the user's identifying information; this information is stored not in the body of the HTTP response but in its headers. The browser saves it in a standard location when the response arrives, and on each later request sends the cookie back to the server in the HTTP request headers. After receiving the request, the server analyzes the cookie stored in the request header to recover the client-specific information, and so can dynamically generate content for that client.

Cookie technology has four components

1. A Set-Cookie header line in the HTTP response message. 2. A Cookie header line in the HTTP request message. 3. A cookie file kept on the client system and managed by the user's browser. 4. A back-end database at the Web site.

19. Briefly describe the cookie setting and sending process.

Cookie setting and sending process: first, the client sends an HTTP request to the server. Second, the server sends an HTTP response that includes a Set-Cookie header. Third, the client sends subsequent HTTP requests that include a Cookie header. Fourth, the server sends HTTP responses tailored to that client.
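Steps two and three above can be sketched with the standard library's `http.cookies`; the `sid` name and value are illustrative placeholders:

```python
from http.cookies import SimpleCookie

# Step 2: the server builds the Set-Cookie header line for its response.
server_side = SimpleCookie()
server_side["sid"] = "abc123"
server_side["sid"]["path"] = "/"
print(server_side.output())        # Set-Cookie: sid=abc123; Path=/

# Step 3: the client stores the pair and sends it back in a Cookie header.
client_side = SimpleCookie()
client_side.load("sid=abc123")
print(client_side["sid"].value)    # abc123
```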

20. What is Session? How does Session make use of cookies?

Concept of the Session

HTTP is a stateless protocol, so when the server needs to record user state it needs some mechanism to identify a specific user: this is the Session. In a typical scenario such as a shopping cart, when a user clicks the order button, stateless HTTP alone cannot tell which user did it, so the server creates a dedicated Session for that particular user, identifies and tracks the user, and thus knows how many books are in the cart. The Session is stored on the server and has a unique identifier. There are many ways to store sessions on the server: memory, databases, or files. Session migration must also be considered when clustering; large websites usually run a dedicated cluster of Session servers to store user sessions, in which case Session information is held in memory and caching services such as Memcached are used to store the sessions.

The session with a cookie

How does the server identify a particular client? This is where cookies come in: on each HTTP request, the client sends its cookie to the server. In practice, most applications use cookies to implement Session tracking. When a Session is first created, the server tells the client, via HTTP, to record a Session ID in a cookie; every subsequent request carries this Session ID back to the server, which then knows who the client is. What if cookies are disabled in the client's browser? In that case, session tracking is typically done by URL rewriting: each HTTP interaction appends a parameter such as sid=xxxxx to the URL, by which the server identifies the user.

21. What is the difference between cookies and sessions? Describe a typical scenario for each.

Both are common session-tracking techniques. 1. Cookie data is stored in the client's browser; session data is stored on the server. 2. Cookies are not very secure: others can inspect locally stored cookies and forge them. 3. Sessions are held on the server for a certain period, which hurts server performance as the number of visitors grows; cookies should be used where possible to reduce server load.

Typical scenarios: for cookies, you log in to a website with a username and password, and the next day the site often opens directly without asking you to log in again; the mechanism at work is the cookie. For sessions, a shopping cart: after items are added, the server must know which items each client has added, so it stores that information in a session.

22. What is a Web cache? How do I determine if cached objects are up to date?

A Web cache is a proxy server: a network entity that satisfies HTTP requests on behalf of the origin server. Client requests are sent first to the proxy; if the object is cached, the proxy responds directly without forwarding the request to the origin server. This gives fast responses and reduces request latency, but raises the problem of keeping cached copies consistent with the originals.

Conditional GET method: the cache server sends a GET request to the origin server that includes an If-Modified-Since field in its header lines; if the object has not changed since that date, the origin server replies 304 Not Modified with an empty body, and the cache serves its stored copy.

23. What is a socket?

A socket is the intermediate software abstraction layer between the application layer and the TCP/IP protocol family: a group of interfaces. In design-pattern terms, a socket is a facade: it hides the complex TCP/IP protocol family behind the socket interface. For the user, a simple set of interfaces is all that is needed; the socket organizes the data to conform to the specified protocol.

24. What is HTTPS? How does it relate to and differ from HTTP?

HTTPS (HyperText Transfer Protocol Secure) is a transport protocol for secure communication over computer networks. HTTPS is the secure version of HTTP: HTTP with SSL added (TLS after SSL 3.0). SSL (Secure Sockets Layer) is a security protocol that provides confidentiality and data integrity for network communication; it sits between TCP and the application layer. After SSL 3.0 it was renamed TLS. Authentication and data encryption are used to ensure communication security and data integrity.

Differences

1. HTTPS requires applying to a CA for a certificate; HTTP does not. 2. HTTPS transmits ciphertext via SSL encryption; HTTP transmits plaintext. 3. The default HTTPS port is 443; the default HTTP port is 80. 4. HTTPS = HTTP + encryption + authentication + integrity protection. 5. SSL is stateful, while HTTP connections are stateless.

25. Describe the HTTPS handshake process.

HTTPS uses certificates and encryption to ensure data security. Before data transfer, the Web browser and Web server perform a handshake to negotiate the keys used for encryption. The specific process is as follows:

1. The Web browser sends the list of encryption algorithms it supports to the website server. 2. The server selects an algorithm the browser supports and sends its identity information back to the browser in the form of a certificate. 3. The browser verifies the validity of the certificate; if the certificate is trusted, this is shown in the address bar. The browser then randomly generates session-key material, encrypts it with the public key in the certificate, computes a hash of the handshake messages so far, and sends all of this back to the server. 4. The server uses its private key to decrypt the session-key material, decrypts the browser's handshake message, verifies that the hash matches the browser's, and sends back an encrypted handshake response. 5. The browser decrypts the server's handshake response and checks its hash; if it matches, the handshake is complete, and from then on the two sides exchange data encrypted with the symmetric key generated earlier by the browser.
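Python's `ssl` module exposes the client side of this machinery. A minimal sketch (no actual handshake is run here): a default client context already enforces the certificate checks from steps 2-3, and wrapping a TCP socket would trigger the handshake described above.

```python
import ssl

# A client-side TLS context with browser-like defaults:
# certificate validation on, host-name checking on.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: server must present a CA-signed certificate
print(ctx.check_hostname)                    # True: the certificate must match the host name

# Wrapping a connected TCP socket would then perform the handshake, e.g.:
# tls_sock = ctx.wrap_socket(tcp_sock, server_hostname="example.com")
```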

26. What is DNS? What is its name space (domain-name naming convention)?

Domain Name System (DNS): a directory service that translates host names into IP addresses, provides host aliases, and supports load balancing (one DNS name can map to multiple IP addresses). It is a distributed database implemented by a hierarchy of DNS servers: root DNS servers; top-level-domain (TLD) DNS servers (three common kinds: organizational domains, country domains, and the reverse domain in-addr.arpa); authoritative DNS servers; and local DNS servers (which, strictly speaking, do not belong to the hierarchy, but are very important). It is also an application-layer protocol that lets hosts query the distributed database. The DNS protocol runs over UDP and uses port 53.

Name space: a fully qualified domain name (FQDN) is host name + domain name (servers conventionally use the default host name www, e.g. www.baidu.com). After registering a domain, you may define further subdomains beneath it, e.g. www.bbs.xidian.edu.cn.
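On the wire, an FQDN is encoded in a DNS query as a sequence of length-prefixed labels ending in a zero byte; a minimal sketch of that encoding:

```python
def encode_qname(fqdn: str) -> bytes:
    """Encode an FQDN the way it appears in a DNS query: length-prefixed labels."""
    out = b""
    for label in fqdn.rstrip(".").split("."):
        out += bytes([len(label)]) + label.encode("ascii")
    return out + b"\x00"   # a zero byte terminates the name (the root label)

print(encode_qname("www.baidu.com"))
# b'\x03www\x05baidu\x03com\x00'
```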

27. Describe the principles and process of DNS resolution.

Static resolution: frequently used mappings are stored on the local machine, and a local file is consulted to obtain the IP address. For example, on Linux some host-name-to-IP mappings are kept in /etc/hosts, a static resolution file. Dynamic resolution: DNS resolution. 1. The client holds a domain name and asks DNS for the IP address: forward lookup (the host issues a recursive query to the local DNS server, and the local DNS server iteratively queries the DNS servers at the other layers). 2. The client holds an IP address and asks DNS for the corresponding fully qualified domain name: reverse lookup (via PTR records under the in-addr.arpa domain).

Iterative and recursive queries (local → root → TLD → authoritative). With DNS caching, the local server can cache the IP address of a TLD server and go to it directly, skipping the root-server step that would otherwise resolve the TLD server's address. In practice a lookup typically checks the browser cache first, then the local hosts file, then the router/OS cache, and only then queries the DNS server.

28. Describe the DNS cache.

DNS caching: when a DNS server receives a host-name-to-IP mapping, it stores the mapping locally to speed up responses to later client queries. Since mappings between host names and IP addresses are not permanent, DNS servers discard cached entries after a period of time (often around two days).
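The expire-after-a-while behavior can be sketched as a toy cache; the injectable clock is an assumption made purely so the expiry is easy to demonstrate:

```python
import time

class DnsCache:
    """Toy DNS cache: drop a mapping once its time-to-live expires."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock      # injectable clock (for testing); defaults to real time
        self._entries = {}       # name -> (ip, expiry time)

    def put(self, name: str, ip: str, ttl: float) -> None:
        self._entries[name] = (ip, self._clock() + ttl)

    def get(self, name: str):
        entry = self._entries.get(name)
        if entry is None:
            return None
        ip, expires = entry
        if self._clock() >= expires:   # stale: discard, forcing a fresh lookup
            del self._entries[name]
            return None
        return ip
```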

29. Describe the concepts of FTP.

File Transfer Protocol (FTP). As the protocol for sharing files over the network, FTP is widely used in network applications. Its goals are to promote file sharing and to transfer data reliably and efficiently. It runs on TCP ports 20 (data) and 21 (control). However, whether port 20 is used for data depends on the transfer mode: in active mode the data port is 20; in passive mode the server and client negotiate the port to use.

30. What are the two working modes of FTP?

Active mode: PORT mode

In active mode, the client opens a random port N (N > 1024) and connects to the server's command port (port 21). At the same time, the client opens port N+1 to listen and sends the PORT command, carrying its address and port as (h1,h2,h3,h4,p1,p2), to the server. The server then actively connects from its own data port (20) to the data port (N+1) specified by the client. In short, the FTP client simply tells the server its port number and lets the server connect in. From the client firewall's point of view, this is an outside-to-inside connection and may be blocked.

Passive mode: PASV mode

In passive mode, both the command connection and the data connection are initiated by the client, which solves the problem of the server-to-client data connection being filtered by the client's firewall. When an FTP connection is opened in passive mode, the client opens two arbitrary local ports (N > 1024 and N+1). The first connects to the server's port 21 and submits the PASV command. The server then opens an arbitrary port (P > 1024) and returns something like "227 Entering Passive Mode (127,0,0,1,4,18)": a reply starting with 227 containing six comma-separated numbers in parentheses. The first four are the server's address; the port is the fifth number times 256 plus the sixth. So for "227 Entering Passive Mode (h1,h2,h3,h4,p1,p2)" the port number is p1*256 + p2 and the IP address is h1.h2.h3.h4; this is the port the server has opened for data transfer. After parsing the reply to obtain the port number, the client connects from port N+1 to the server's port P, and data is transferred between these two ports.
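The p1*256 + p2 arithmetic above is easy to check with a small parser for the 227 reply:

```python
import re

def parse_pasv(reply: str):
    """Extract (ip, port) from a 227 PASV reply; port = p1*256 + p2."""
    match = re.search(r"\((\d+),(\d+),(\d+),(\d+),(\d+),(\d+)\)", reply)
    h1, h2, h3, h4, p1, p2 = map(int, match.groups())
    return f"{h1}.{h2}.{h3}.{h4}", p1 * 256 + p2

print(parse_pasv("227 Entering Passive Mode (127,0,0,1,4,18)"))
# ('127.0.0.1', 1042)
```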

Differences between the two modes

Because of firewalls, active mode is troublesome on the client side and passive mode on the server side. In active mode, the server's inbound connection to the client's data port may be blocked by the client's firewall. In passive mode, the client's connection to the server's arbitrary data port may be blocked by the server's firewall.

31. FTP commands and response codes.

32. Brief description of FTP resumable data transfer.

The principle of FTP resumable transfer (breakpoint continuation) is simple; it divides into resumable download and resumable upload. I. Download; the client's implementation steps are as follows:

1. Send "REST + local file length" to the server, telling the server the client wants to resume a download. At this point the server does not yet know which file the client wants. To support resuming, the FTP server must implement the REST command, which is defined in the FTP protocol text RFC 959 but is not mandatory for servers. In general, you can probe with a REST 100 command before downloading: if it executes correctly, the server supports resumable transfer. The number after REST indicates the offset from which the file will be read; REST 0 means start from the beginning. REST does not itself perform the download; you still issue RETR to download. 2. Send "RETR + file name" to the server to tell it which file to download; the server then positions its file pointer and begins reading and sending data from that offset. 3. The client positions its local file pointer at the end of the existing file. 4. With both ends prepared, the client creates a socket, establishes the data channel in passive or active mode, and repeatedly calls recv to receive data and append it to the local file.

II. Upload:

1. Compare the size of the file on the server with the local copy to be uploaded. 2. Send "APPE + file name" to the server, telling it that the data arriving next on the data channel should be appended to the end of the file. 3. Position the local file pointer at the offset matching the server's copy. 4. Read data from the file pointer onward and send it.

33. Brief anonymous FTP.

Anonymous FTP is anonymous use of the file transfer protocol: the user connects to a remote machine as an anonymous or guest user and transfers public files to the local computer. Ordinarily, before using FTP you must log in to the remote host and obtain permission to upload or download files; that is, to transfer files with any computer you must be appropriately authorized on that computer. Anonymous FTP was created to remove this barrier. A large proportion of FTP servers on the Internet are "anonymous" FTP servers, whose purpose is to provide file-copying services to the public: users need not register with the server in advance or obtain authorization from it. Anonymous file transfer lets a user connect to a remote host and copy files from it without being a registered user of that host. The service provider creates a public account (usually anonymous) on its FTP server and grants the account access to a public directory.

34. Briefly describe DHCP.

Dynamic Host Configuration Protocol. How does a host obtain an IP address from a DHCP server? Since the host has not yet been assigned an IP address, it sends its request with source IP 0.0.0.0 as a broadcast. The exchange has four steps: DHCP Discover (client broadcast), DHCP Offer (server), DHCP Request (client), and DHCP ACK (server).

35. Basic parameters of port numbers.

A port number is a 16-bit value, ranging from 0 to 65535; ports 0-1023 are the well-known port numbers.
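A small classifier makes the ranges concrete (the registered/dynamic split follows the IANA convention, which goes beyond what the note above states):

```python
def port_class(port: int) -> str:
    """Classify a 16-bit port number by the IANA ranges."""
    if not 0 <= port <= 0xFFFF:          # 16 bits: 0..65535
        raise ValueError("port numbers are 16-bit: 0-65535")
    if port <= 1023:
        return "well-known"
    if port <= 49151:
        return "registered"
    return "dynamic/private"

print(port_class(80))     # well-known (HTTP)
print(port_class(53))     # well-known (DNS)
print(port_class(8080))   # registered
print(port_class(60000))  # dynamic/private
```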

The transport layer

36. Describe the concepts and differences between TCP and UDP.

TCP: Transmission Control Protocol, a connection-oriented, byte-stream-oriented protocol. The total data generated by an application may bear no fixed relation to the individual IP datagrams actually sent. UDP: User Datagram Protocol, a simple, datagram-oriented, connectionless transport-layer protocol. Each output operation of a process produces exactly one UDP datagram, which is assembled into an IP datagram and sent. UDP implements only multiplexing/demultiplexing and simple error detection.

Differences:

  1. TCP is connection-oriented and UDP is connectionless

  2. TCP is byte-stream oriented; UDP is message (datagram) oriented

  3. TCP provides reliable transmission services (data sequence and correctness), while UDP does not

  4. TCP has a low transmission speed, and UDP has a high transmission speed

  5. TCP requires more system resources (with a high header overhead), while UDP requires less

  6. TCP provides a congestion control mechanism, so that the transmission rate and time are not completely controlled by applications. UDP does not provide congestion control, so that applications can better control the transmission time and rate

37. How can reliable data transfer be implemented over UDP?

By adding reliability mechanisms (acknowledgements, retransmission, sequence numbers) and application-specific error recovery at the application layer, at the cost of more difficult application development.

38. Describe the format of UDP packet fields.

Header fields + data field. UDP length: the total length of the header and data. Checksum: detects whether errors occurred in the UDP segment during transmission. Besides the UDP segment itself (header + data), the checksum also covers a pseudo-header.

Sender: UDP takes the one's-complement sum of all 16-bit words in the segment, wrapping any overflow around (the carry is added back into the least significant bit), then complements the result. Receiver: sums the received words the same way and checks the result against the checksum (summing the segment plus the checksum should give all 1s); if not, an error is detected.
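The sender's computation can be sketched directly; the sample bytes are the worked example from RFC 1071, not a real UDP segment:

```python
def internet_checksum(data: bytes) -> int:
    """One's-complement sum of 16-bit words, with overflow wrapped back into the low bit."""
    if len(data) % 2:
        data += b"\x00"                              # pad odd-length input with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)     # wrap the carry around
    return ~total & 0xFFFF                           # complement of the final sum

segment = b"\x00\x01\xf2\x03\xf4\xf5\xf6\xf7"
csum = internet_checksum(segment)
print(hex(csum))  # 0x220d

# Receiver's check: summing the segment together with its checksum yields all 1s,
# so running the same routine over segment + checksum gives 0.
print(internet_checksum(segment + csum.to_bytes(2, "big")))  # 0
```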

39. What is the pseudo-header? What does it contain? Why is it included when computing the checksum?

The pseudo-header is a virtual data structure: its fields are taken from the IP header purely for computing the checksum; it is never actually transmitted. It contains the source IP address, destination IP address, a zero byte, the protocol number (17 for UDP), and the UDP length. Its purpose is to let UDP double-check that the data has arrived at the correct destination: not only that the segment content is intact, but also that the destination IP address is this host's and that the operating system delivered the datagram to UDP correctly.

40. What is the Reliable Transport Protocol (RDT)? How many common reliable transport protocols are there and their potential drawbacks?

rdt1.0: reliable data transfer over a reliable channel. rdt2.0: transfer over a channel with bit errors only (error detection, ACK/NAK mechanism, retransmission). rdt2.1: on top of rdt2.0, handles corrupted ACK/NAKs (sequence numbers are introduced; the receiver uses them to tell new packets from retransmissions). rdt2.2: handles corrupted ACKs without NAKs (the receiver instead tells the sender the number of the last correctly received packet). rdt3.0: transfer over a channel with both bit errors and packet loss (a suitable timeout is added on top of the above).

41. Why is the performance of the stop-and-wait RDT protocol low? What is the solution?

Because stop-and-wait forces the sender to wait roughly one RTT for an acknowledgement after each packet is sent, utilization is low. The solution is an RDT protocol with a pipelining mechanism: instead of operating stop-and-wait, the sender may send multiple packets without waiting for acknowledgements, increasing efficiency.

42. What should be paid attention to in a reliable data transfer protocol with pipelining mechanism?

1. The sequence number range must be enlarged. 2. An appropriate error recovery mechanism is needed: a sliding window protocol, either GBN (Go-Back-N) or SR (Selective Repeat).

43. Briefly describe the Go-Back-N (GBN) protocol.

In the GBN protocol, the sender may transmit several data frames in succession without waiting for each to be acknowledged, and may keep sending even while acknowledgements from the receiver arrive. The sender maintains a single timeout timer, which can be thought of as belonging to the oldest sent-but-unacknowledged packet: if an ACK arrives but unacknowledged packets remain, the timer is restarted; if no sent-but-unacknowledged packets remain, the timer is stopped. If no acknowledgement arrives within the timeout, the corresponding frame is judged lost or corrupted, and the sender must retransmit that frame together with every frame sent after it.

The receiver accepts frames strictly in sequence, acknowledging with ACK(n). To reduce overhead, GBN uses cumulative acknowledgement: the receiver may acknowledge only the last of several correctly received frames, or piggyback the acknowledgement on its own outgoing data. An acknowledgement of frame n means that frame n and all frames before it have been received correctly.

The GBN receive window is 1, which guarantees that frames are accepted in order. If frames are numbered with n bits, the send window size must lie between 1 and 2^n - 1; if it were larger, the receiver could not distinguish a new frame from an old one.

Disadvantage: when a frame is lost, not only the lost frame but all frames sent after it must be retransmitted, which lowers transmission efficiency.
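The cost described above can be made concrete with a toy model (hypothetical helper name; window bookkeeping only, no real networking):

```python
def gbn_retransmit(base: int, next_seq: int) -> list[int]:
    """On timeout, a Go-Back-N sender resends every packet that has been
    sent but not yet cumulatively acknowledged: the range [base, next_seq).
    `base` is the oldest unacknowledged sequence number; `next_seq` is the
    next sequence number that would be used for a new packet."""
    return list(range(base, next_seq))

# Window size 4: packets 3..6 were sent, packet 3 was lost, so the
# cumulative ACK never advanced past base = 3. When the single timer
# fires, packets 3, 4, 5 and 6 are all sent again, even if 4..6
# arrived intact at the receiver.
assert gbn_retransmit(base=3, next_seq=7) == [3, 4, 5, 6]
```

This is exactly the inefficiency that Selective Repeat (next question) avoids by retransmitting only the missing frame.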

44. Briefly describe the selective retransmission (SR) protocol.

In the SR protocol, when the receiver detects an erroneous frame, the correct frames that arrive afterwards cannot be delivered to the upper layer immediately; instead the receiver buffers them and asks the sender to retransmit only the erroneous frame. Once the retransmitted frame arrives, it and the buffered frames can be delivered to the upper layer in the correct order. SR clearly wastes less bandwidth than GBN, but it requires the receiver to have enough buffer space.

With n-bit sequence numbers: send window size + receive window size <= 2^n.

45. Compare the stop-and-wait, GBN and SR protocols from the sliding window perspective.

Viewed through the sliding window, stop-and-wait, Go-Back-N and Selective Repeat differ only in their window sizes. Stop-and-wait: send window = 1, receive window = 1. Go-Back-N: send window > 1, receive window = 1. Selective Repeat: send window > 1, receive window > 1.

46. Describe the format of TCP packet fields.

Sequence number: the byte-stream number of the first byte in the segment's data, not a segment index.

Acknowledgement number: the acknowledgement number that host A places in its segment is the sequence number of the next byte host A expects to receive from host B. TCP uses cumulative acknowledgement (similar to GBN in this respect).

Flag field: 8 bits (CWR, ECE, URG, ACK, PSH, RST, SYN, FIN), each with a different meaning.

Receive Window: This field is used for flow control and is used to indicate the number of bytes the receiver is willing to accept.

Urgent data pointer: indicates the end of the urgent data in the segment.

Options field: used, for example, when the sender and receiver negotiate the maximum segment size (MSS).

47. How does a TCP receiver handle an out-of-order packet segment?

The specification does not mandate any behaviour here; it is left to the implementer. In practice, most implementations buffer out-of-order segments and wait for the missing bytes to fill the gap.

48. In what ways does TCP ensure the reliability of data transmission?

1. Application data is split into the blocks TCP considers best suited for sending. 2. Acknowledgement mechanism: after a segment is sent, the sender waits for an acknowledgement. 3. Retransmission mechanism: if no acknowledgement arrives, the segment is retransmitted. 4. A checksum over the header and data verifies the accuracy of the data. 5. Reordering, duplicate discarding, and flow control.

49. Briefly introduce the principle of TCP three-way handshake.

1. First handshake: to establish a connection, the client sends a SYN segment (seq = x) to the server, enters the SYN_SENT state, and waits for confirmation from the server.

2. Second handshake: on receiving the SYN, the server must acknowledge the client's SYN (ack = x + 1) and send its own SYN (seq = y) in the same segment, agreeing to establish the connection. The server enters the SYN_RCVD state. Because TCP is full duplex, this SYN is the server's own request to open the server-to-client direction of the connection.

3. Third handshake: on receiving the server's SYN+ACK segment, the client sends an ACK (ack = y + 1) back to the server and enters the ESTABLISHED state. When the server receives this ACK, it too enters the ESTABLISHED state. The three-way handshake is now complete and both sides can transmit data at any time; a bidirectional connection has been established.

50. Why three handshakes, not two or four?

First, be clear that the TCP handshake exists to establish a connection; its core purpose is to exchange the initial Sequence Number of each side while guaranteeing reliable transmission and keeping overhead low. Each side must tell the other its initialized Sequence Number value, i.e. seq = x and seq = y. These values serialize all subsequent data, ensuring that the bytes delivered to the application layer are not reordered by the network; TCP uses the sequence numbers to reassemble the byte stream. Therefore, after the server sends back its own Sequence Number in the second handshake, the client must send an acknowledgement to inform the server that its initial Sequence Number value was received.

Why not four handshakes: the server can combine its ACK of the client's SYN with its own SYN in a single segment, improving transmission efficiency. Why not two handshakes: the server would have no confirmation that the client received its initial sequence value, so the two sides might disagree on it; and TCP does not retransmit a bare ACK that carries no data.

51. What happens when the handshake times out? Analysis of SYN timeout and SYN flood attacks.

If the client goes offline after the server has replied with a SYN-ACK, the server never receives the client's ACK, and the connection is stuck in a half-open intermediate state, neither established nor failed. On Linux, the server retries the SYN-ACK five times by default, with a retry interval that starts at 1 second and doubles each time, so the attempt is abandoned after about 63 seconds.

This exposes the server to SYN flood attacks: a malicious client sends SYN segments and goes offline, and by default each bogus request ties up the server for those 63 seconds, eventually exhausting the server's SYN queue so that legitimate connection requests cannot be processed. As a defense, Linux provides the tcp_syncookies parameter: when the SYN queue is full, the server replies with a SYN cookie derived from the source address and port, destination address and port, and a timestamp, instead of queueing the request. A legitimate client echoes the cookie back to the server and the connection can be established even though the SYN queue is full and the request was never queued; an attacker simply never responds.

What if the client fails after the connection is established? TCP has a keepalive mechanism: after a connection has been idle for the keepalive time, the end with keepalive enabled sends a probe segment to its peer, and keeps probing at the configured interval if no response arrives. If there is still no response when the number of attempts reaches the configured probe count, the peer is declared unreachable and the connection is torn down.

52. A brief introduction to the principle of the TCP four-way wave (connection teardown).

1. First wave: both client and server start in the ESTABLISHED state. The client requests to close first: it sends a FIN segment to stop client-to-server data transfer, indicating it has no more data to send, and enters the FIN_WAIT_1 (termination wait 1) state.

2. Second wave: on receiving the FIN, the server sends an ACK to the client with acknowledgement number equal to the received sequence number + 1 (like a SYN, a FIN consumes one sequence number), and enters the CLOSE_WAIT state. The connection is now half-closed: the client will send no more data, but the server may still send data to the client until it has finished.

3. Third wave: when the server has finished sending, it sends its own FIN segment to stop server-to-client data transfer, enters the LAST_ACK state, and waits for the final acknowledgement that closes the connection.

4. Fourth wave: on receiving the server's FIN, the client replies with an ACK (acknowledgement number + 1) and enters the TIME_WAIT state, where it must wait for twice the maximum segment lifetime (2 MSL) to guarantee the connection closes reliably before it finally enters CLOSED. When the server receives this ACK, it enters the CLOSED state immediately. The four-way wave is then complete, and the connection is fully closed.

53. Why is there a TIME_WAIT state?

TIME_WAIT is a timed transition from the closing states to CLOSED, with the timeout set to 2 MSL. It ensures there is enough time for the final ACK to reach the other side: if that ACK is lost, the peer retransmits its FIN, and the waiting side can acknowledge it again. It also prevents old segments from one connection being confused with a new connection that reuses the same addresses and ports.

54. Why do you need three handshakes to establish a connection and four waves to disconnect a connection?

TCP is full duplex: data can flow in both directions simultaneously, so each direction must be shut down separately, each requiring its own FIN and ACK. In other words, each side waves twice, one of them passively, which is why teardown looks like four waves. Establishment and teardown are both bidirectional, but when the connection is established the server's SYN and its ACK of the client's SYN can be combined into one segment, so three segments suffice. When the connection is torn down, the two directions may stop sending at different times, so the server's ACK of the client's FIN and the server's own FIN cannot generally be merged and are sent separately. That is why establishing a connection takes three handshakes while disconnecting takes four waves.

55. Describe the process of simultaneously opening TCP.

The status changes on both ends are CLOSED->SYN_SENT->SYN_RCVD->ESTABLISHED.

56. Describe the process of TCP simultaneous shutdown.

The state changes at both ends are ESTABLISHED->FIN_WAIT_1->CLOSING->TIME_WAIT->CLOSED.

57. Describe the process of TCP half-open and half-closed.

Half open:

A half-open TCP connection is one where one end has crashed abnormally or closed the connection without informing the peer, so data can no longer be sent or received normally; attempts to use the connection elicit an RST (described later). A common cause is one end losing power abruptly: if no data is being transmitted, the peer knows nothing of the failure and remains in the ESTABLISHED state. (TCP's keepalive mechanism, covered under the TCP timers, can detect this.) Separately, from the perspective of the Linux implementation, which maintains a half-connection queue, "half-open" can also mean a connection for which a SYN has been sent and the reply is still awaited: the two parties cannot yet communicate because the three-way handshake is incomplete, hence it is called a half connection.

A half closed:

In a half-closed TCP connection, only one end has sent a FIN segment; the other end has not, so data can still flow in one direction. This is not a common scenario. In the Berkeley sockets API, shutdown() puts a connection into the half-closed state (an ordinary close() call closes both directions of the connection). shutdown() effectively says: the local end has no more data to send, so it sends a FIN to the peer, but it still wants to receive data from the peer until the peer sends its own FIN to close the connection.

58. RTT round-trip time estimation.

Definition: the SampleRTT of a segment is the amount of time between sending the segment (i.e. handing it to IP) and receiving its acknowledgement. Most TCP implementations measure only one SampleRTT at a time, rather than one per segment sent. In addition, TCP maintains the parameters EstimatedRTT and DevRTT, from which the retransmission interval is set: EstimatedRTT = (1 - α) · EstimatedRTT + α · SampleRTT (typically α = 0.125); RTT deviation: DevRTT = (1 - β) · DevRTT + β · |SampleRTT - EstimatedRTT| (typically β = 0.25).

59. Set the TCP timer timeout interval.

Timeout retransmission interval: the recommended initial value is 1 s; thereafter TimeoutInterval = EstimatedRTT + 4 · DevRTT. In practice, TCP flow and congestion control also influence how TimeoutInterval changes dynamically. Doubling the timeout: on each TCP retransmission, the timeout interval is set to twice its previous value rather than being derived from EstimatedRTT and DevRTT; this mainly serves to avoid aggravating network congestion.
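The estimator updates in the two questions above can be sketched as one function (a minimal model; the helper name is hypothetical, and the α/β defaults are the commonly recommended values):

```python
def update_rtt(estimated, dev, sample, alpha=0.125, beta=0.25):
    """One RTT-estimation step:
       DevRTT          = (1-beta)  * DevRTT + beta * |SampleRTT - EstimatedRTT|
       EstimatedRTT    = (1-alpha) * EstimatedRTT + alpha * SampleRTT
       TimeoutInterval = EstimatedRTT + 4 * DevRTT
    DevRTT is updated first, using the old EstimatedRTT."""
    dev = (1 - beta) * dev + beta * abs(sample - estimated)
    estimated = (1 - alpha) * estimated + alpha * sample
    timeout = estimated + 4 * dev
    return estimated, dev, timeout

# One measured sample of 0.2 s nudges the smoothed estimate upward
# and widens the timeout to cover the observed variability.
est, dev, timeout = update_rtt(estimated=0.1, dev=0.05, sample=0.2)
```

Because both updates are exponentially weighted moving averages, old samples decay quickly while the estimate stays stable against single outliers.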

60. Is TCP a GBN protocol or an SR protocol?

Neither; it is more like a hybrid. TCP acknowledgements are cumulative: correctly received but out-of-order segments are not acknowledged individually (note the off-by-one difference: a GBN ACK names the last sequence number received, while a TCP ACK names the next expected sequence number). The sender therefore only needs to track the smallest sequence number that has been sent but not acknowledged and the next sequence number to send. This resembles GBN. However, TCP buffers correctly received out-of-order segments, which resembles SR. TCP's selective acknowledgement (SACK) option lets the receiver acknowledge out-of-order segments selectively rather than only cumulatively acknowledging the last correctly received in-order segment; combined with selective retransmission (skipping segments the receiver has already selectively acknowledged), TCP looks very much like SR. Unlike SR's per-packet timers, though, TCP maintains a single retransmission timer, conceptually associated with the oldest unacknowledged segment.


61. What are the four timers used for TCP connections? What does it do?

1. Retransmission timer: when TCP sends a segment, it starts a retransmission timer; if no acknowledgement arrives before it expires, the segment is retransmitted. 2. Persist timer: used for flow control. When the receiver advertises a window of size 0, the sender starts the persist timer; each time it expires, the sender sends a probe segment asking for the current receive window size. This avoids the deadlock that would occur if the receiver's window-update segment (window > 0) were lost, leaving both sides waiting on each other forever. 3. Keepalive timer (typically 2 h): prevents a TCP connection from sitting idle indefinitely by probing the peer and tearing down dead connections. 4. TIME_WAIT timer (2 MSL): when the client enters the TIME_WAIT state, the connection is not yet released; the client may close it only after waiting twice the maximum segment lifetime, during which the connection remains in this transitional state.

62. Briefly describe the TCP traffic control mechanism.

Flow control means keeping the sender's rate low enough that the receiver can receive in time. During communication the sender's rate is not necessarily the same as the receiver's; if the sender is too fast, the receiver can only store unprocessed data in its buffer (out-of-order segments are buffered there too). Once the buffer is full, any further arrivals can only be thrown away, and massive packet loss greatly wastes network resources. Using the sliding window mechanism, TCP can conveniently control the sender's rate on a connection.

How is flow control done? With each segment it sends, the receiver tells the sender how much of its buffer is free; this remaining buffer space is called the size of the receive window. On hearing it, the sender adjusts its sending rate, i.e. the size of its send window. When the receive window is 0, the sender stops sending data to prevent massive packet loss.

When does the sender resume? On receiving a window of size 0, the sender stops sending and starts the persist timer; each time the timer fires, it sends a probe segment asking the receiver whether sending can continue. If it can, the receiver replies with its current receive window size; if the window is still 0, the sender restarts the timer and probes again later.

63. What is AIMD?

AIMD (Additive Increase, Multiplicative Decrease) is the congestion control law used by TCP: the window grows by a fixed increment each round trip and is cut multiplicatively on loss. It is not entirely fair, however: because the window is adjusted once per round-trip time, connections with shorter RTTs (closer to the host) grow their windows faster and obtain more bandwidth than connections farther away.

64. Describe the TCP congestion control mechanism.

Congestion control prevents too much data from being injected into the network, so that routers and links do not become overloaded. There are two classic versions of the TCP congestion control mechanism, TCP Tahoe and TCP Reno; the difference between the two is whether the fast recovery mechanism is used.

Slow start:

When a new connection is created, cwnd is initialized to one maximum segment size (MSS), and the sender starts sending at the congestion-window size. Each time a segment is acknowledged, cwnd increases by one MSS, so cwnd doubles every network round-trip time (RTT), i.e. it grows exponentially. Slow start is in fact not slow at all; it merely starts from a low base.

Congestion avoidance:

As the slow-start description shows, cwnd grows quickly so as to make full use of the network's bandwidth, but it cannot keep growing indefinitely; there must be a limit. TCP uses a variable called the slow start threshold (ssthresh): once cwnd exceeds this value, the slow-start process ends and the congestion avoidance phase begins. In many TCP implementations the initial ssthresh value is 65535 bytes. The main idea of congestion avoidance is additive increase: cwnd no longer grows exponentially but additively. Each time every segment in the window has been acknowledged (i.e. once per RTT), cwnd increases by one MSS, so cwnd now grows linearly with RTT. In this way network congestion from over-rapid growth is avoided, and cwnd is gradually adjusted toward the network's optimal value.
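The two growth phases can be traced per RTT with a toy model (cwnd in units of MSS; the helper name and the ssthresh value are illustrative only):

```python
def cwnd_trace(ssthresh: float, rounds: int) -> list[float]:
    """cwnd (in MSS) at the start of each RTT, ignoring losses:
    it doubles during slow start, then grows by one MSS per RTT
    once it reaches ssthresh (congestion avoidance)."""
    cwnd, trace = 1.0, []
    for _ in range(rounds):
        trace.append(cwnd)
        cwnd = cwnd * 2 if cwnd < ssthresh else cwnd + 1
    return trace

# ssthresh = 8 MSS: exponential growth 1, 2, 4, 8, then linear 9, 10, 11.
assert cwnd_trace(8, 7) == [1, 2, 4, 8, 9, 10, 11]
```

The trace makes the phase change visible: it takes only log2(ssthresh) RTTs to reach the threshold, after which growth becomes one MSS per RTT.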

Fast retransmission:

TCP associates a retransmission timeout (RTO) with outstanding data; when the RTO expires without the data being acknowledged, TCP retransmits the segment and the sender assumes the network is congested. Fast retransmission requires the receiver, on receiving an out-of-order segment, to send a duplicate acknowledgement immediately (so the sender promptly learns that a segment did not arrive) rather than waiting until it has data of its own to piggyback the acknowledgement on. Fast retransmission improves network throughput: on receiving three duplicate ACKs for the same segment, the sender concludes the following segment was lost and retransmits it at once, without waiting for the timer to expire. In this case too, the sender considers the network congested.

Quick recovery:

On a triple duplicate ACK, the sender assumes the network may not be severely congested, reasoning that it would not be receiving multiple duplicate acknowledgements at all if it were. Tahoe, after deciding the network is congested: (1) set the slow start threshold to half the current congestion window, (2) set the congestion window to 1, and (3) re-enter the slow start state. Reno, after deciding the network is congested via duplicate ACKs: (1) set the slow start threshold to half the current congestion window, (2) set the congestion window to the new slow start threshold, and (3) enter the congestion avoidance state (fast recovery).
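The two reactions can be contrasted in a few lines (a simplified textbook model; cwnd in units of MSS, helper and event names hypothetical):

```python
def on_loss(cwnd: float, ssthresh: float, event: str, variant: str):
    """Reaction to a loss event under the two classic variants.

    Both set ssthresh to half the current window. Tahoe always restarts
    slow start (cwnd = 1 MSS). Reno does the same on a timeout, but on
    a triple-duplicate-ACK loss it performs fast recovery: cwnd jumps
    to the new ssthresh and growth continues in congestion avoidance."""
    ssthresh = cwnd / 2
    if variant == "tahoe" or event == "timeout":
        cwnd = 1.0               # back to slow start
    else:                        # Reno + triple duplicate ACK
        cwnd = ssthresh          # fast recovery -> congestion avoidance
    return cwnd, ssthresh

assert on_loss(16, 32, "3dupack", "tahoe") == (1.0, 8.0)
assert on_loss(16, 32, "3dupack", "reno") == (8.0, 8.0)
```

The asserts show the whole difference between the variants: after the same loss signal, Tahoe falls back to 1 MSS while Reno resumes from half its previous window.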

(Schematic diagrams of the TCP Tahoe and TCP Reno versions omitted.)

65. Describe the similarities and differences between flow control (sliding window RWND) and congestion control (congestion window CWND).

Similarities: both manifest through packet loss, and both work by making the sender send more slowly and send less. Differences: congestion control is about keeping the load within what the network as a whole can bear, a global concern; flow control refers to controlling point-to-point traffic, an end-to-end concern, pacing the sender's rate so that the receiver can receive in time. The packet loss addressed by congestion control occurs at routers; the packet loss addressed by flow control occurs at the receiver.

The sliding window generally refers to the receive window: the receiver's advertised free buffer space, used for flow control. The congestion window is used for congestion control. TCP's actual send window = min(rwnd, cwnd).

The network layer

66. Is the network layer end-to-end?

No. Every host and router must run the network layer protocols, not just the end systems/hosts.

67. Briefly describe three common data exchange modes and their relations and differences in computer networks.

Circuit switching

Circuit switching requires establishing a dedicated physical communication link, possibly through many intermediate nodes, which is used exclusively for the duration of the communication; resources are released only when the communication ends, so utilization is low. Circuit switching suits transfers of large volumes of data with strict real-time requirements.

Message switching

Message switching uses the complete message as the unit of data transmission, carrying information such as the source address and destination address, and transmits in store-and-forward mode. There is no limit on message size, so network nodes must have sufficient buffer space.

Packet switching

Packet switching divides a large block of data into smaller packets and adds information such as source address, destination address, and packet number. It uses store-and-forward transmission. Because packet sizes are relatively small and fixed compared to whole messages, buffer space is easy to manage, but packets may arrive out of order and the per-packet headers increase overhead. Packet switching can be further divided into datagram switching and virtual circuit switching.

68. Briefly describe the definitions and differences of two types of packet switching – datagram switching and virtual circuit switching.

First, be clear that packet switching is not the same thing as connectionless service; these are two different concepts. Packet switching offers both connection-oriented virtual circuit switching and connectionless datagram switching.

Datagram switching

Datagram switching, put simply, is message switching with a packetization mechanism added: each packet is routed independently and forwarded store-and-forward.

Virtual circuit switching

Virtual circuit switching, put simply, adds a packetization mechanism on top of circuit switching. But whereas circuit switching is a connection-oriented service that requests a physical circuit and occupies it exclusively, a virtual circuit is multiplexed on top of physical circuits: it is a logical circuit. Packets exclusively use the logical link (there is no multiplexing at that level, unlike the physical circuit) and follow it in store-and-forward fashion, with each router maintaining a virtual-circuit forwarding table. Many virtual circuits can be established over one physical connection.

69. What are the core functions of the network layer?

Forwarding and routing. Forwarding: moving packets from a router's input port to the appropriate output port. Routing: determining the path packets take from the source to the destination.

70. Do "connection-oriented" and "connectionless" mean the same thing at the network layer and the transport layer?

No. At the network layer, connection-oriented and connectionless services refer to connections between hosts, while at the transport layer they refer to connections between application processes (end to end). This is the substantial difference between the two.

71. What are the two protocols at the Internet network layer? What are they for?

IP and ICMP. The IP protocol is responsible for datagram format, routing and addressing, and related functions. ICMP, the Internet Control Message Protocol, carries control messages (error reports and network queries) between IP hosts and routers.

72. Briefly describe the IP packet format.

Version: 4 for IPv4, 6 for IPv6. (4 bits)

Header length: in units of 4 bytes; the minimum value is 5 and the maximum is 15, so the header is at least 20 bytes and at most 60 bytes long. (4 bits)

Type of Service (TOS): redefined in 1998 as the Differentiated Services field; meaningful only when the network provides differentiated services. Plain IPv4 service does not, so these 8 bits are usually all zero. (8 bits)

Total length: the total number of bytes in the IP datagram (header + data). The maximum is 65535 bytes; subtracting the minimum header, an IP datagram can encapsulate at most 65515 bytes of data. (16 bits)

Identification: identifies which original datagram an IP fragment belongs to, for reassembly. Implemented with a counter that increments by 1 for every datagram generated. (16 bits)

Flags: three bits [Reserved, DF (Don't Fragment), MF (More Fragments)]. DF says whether fragmentation is allowed (=1 forbidden, =0 allowed); MF says whether this is the last fragment (=1 not the last, =0 the last fragment or an unfragmented datagram). (3 bits)

Fragment offset: in units of 8 bytes, which implies that every fragment except the last carries a multiple of 8 data bytes. It gives the fragment's position (relative offset) within the original datagram. (13 bits)

TTL: the number of routers an IP datagram may still traverse. Each router the datagram passes decrements the TTL by 1; when the TTL reaches 0, the router discards the datagram. (8 bits)

Protocol: indicates which protocol's data the IP datagram encapsulates, e.g. TCP (6), UDP (17), ICMP (1). (8 bits)

Header checksum: error detection over the header only. To compute it, the checksum field is set to zero, the header is summed in 16-bit words, and the one's complement of the sum is taken. The header checksum is recomputed hop by hop, i.e. at every router, because the TTL changes at every hop. (16 bits)

Source IP address: the IP address of the host/router sending the datagram. (32 bits)

Destination IP address: the IP address of the host/router receiving the datagram. (32 bits)

73. What is IP sharding? Why IP sharding?

IP fragmentation splits an IP datagram into multiple fragments when it must be forwarded over a link whose MTU is smaller than the datagram; the fragments are reassembled only when they reach the destination host (the identification, flags, and fragment offset fields of the IP header identify the fragments and establish their relative order). Fragmentation is necessary because every link technology imposes a maximum transmission unit (MTU), an upper bound on the size of packet that can cross the link; when a datagram's path includes a link with a smaller MTU, IP fragmentation must be considered.
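The offset-in-8-byte-units rule can be worked through numerically (a toy calculation with a hypothetical helper; the 3980-byte payload over an MTU-1500 link is a standard worked example):

```python
def fragment(total_payload: int, mtu: int, ip_header: int = 20):
    """Split an IP payload of `total_payload` bytes into fragments that
    fit a link MTU. Every fragment except the last carries a payload
    that is a multiple of 8 bytes, because the fragment offset field
    counts 8-byte units; MF=1 on all fragments but the last."""
    max_data = ((mtu - ip_header) // 8) * 8      # largest usable multiple of 8
    frags, offset = [], 0
    while offset < total_payload:
        size = min(max_data, total_payload - offset)
        more = offset + size < total_payload
        frags.append({"offset": offset // 8, "length": size, "MF": int(more)})
        offset += size
    return frags

# 3980 data bytes over an MTU-1500 link (20-byte headers):
# three fragments of 1480, 1480 and 1020 bytes, offsets 0, 185, 370.
assert fragment(3980, 1500) == [
    {"offset": 0,   "length": 1480, "MF": 1},
    {"offset": 185, "length": 1480, "MF": 1},
    {"offset": 370, "length": 1020, "MF": 0},
]
```

Note that each fragment gets its own 20-byte IP header, so fragmentation costs extra header bytes as well as reassembly work at the destination.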

74. Why is there a total length in the IP header?

Because some data links (e.g. Ethernet) pad short frames up to a minimum length. An Ethernet frame's data field must be at least 46 bytes, but an IP datagram can be shorter, so the total length field is needed to determine where the IP data ends and the padding begins.

75. How is the IP header checksum calculated? What are the similarities and differences with the checksums of ICMP, IGMP, TCP and UDP?

Calculation: zero the checksum field, sum the header in 16-bit words, and take the one's complement (inverting first and then summing yields the same result). Similarity: all use the same algorithm. Difference: the IP header checksum covers only the header fields, not the data, whereas the ICMP, IGMP, TCP and UDP checksums cover their data as well.
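The header-only scope and the hop-by-hop recomputation can both be demonstrated on a hand-built 20-byte header (a minimal sketch; the field values are arbitrary examples):

```python
import struct

def ones_complement_sum16(data: bytes) -> int:
    """Sum 16-bit words, wrapping any carry back into the low bits."""
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)
    return total

def ipv4_header_checksum(header: bytes) -> int:
    """Checksum over the header only, with its checksum field zeroed."""
    zeroed = header[:10] + b"\x00\x00" + header[12:]   # bytes 10-11 hold the checksum
    return ~ones_complement_sum16(zeroed) & 0xFFFF

# version/IHL, TOS, total length, id, flags+offset, TTL, protocol,
# checksum (0 for now), source 1.2.3.4, destination 5.6.7.8
hdr = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 1, 0, 64, 6, 0,
                  bytes([1, 2, 3, 4]), bytes([5, 6, 7, 8]))
c1 = ipv4_header_checksum(hdr)

# A router decrements the TTL (byte 8), so the checksum must be
# recomputed at every hop:
hdr2 = hdr[:8] + bytes([63]) + hdr[9:]
assert ipv4_header_checksum(hdr2) != c1
```

Because only the header is covered, a router can rewrite the TTL and checksum without touching (or even reading) the datagram's payload.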

76. Briefly describe IPv4 addressing.

An IP address does not identify a host but a network interface: one IP address is associated with each interface, so a router with multiple interfaces has multiple IP addresses.

IP subnet: the interfaces that can reach one another directly, without crossing a router, form one IP subnet.

IP address (32 bits): network number + subnet number + host number. The parts are not of fixed length, so the prefix length must be made explicit.

Subnet mask: determines the combined length of the network number and subnet number. For example, 255.255.255.0 (a class C mask) means the network number plus subnet number is 24 bits long. Bitwise-ANDing the destination IP address with the subnet mask yields the subnet address.
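The bitwise-AND step can be shown concretely with Python's standard `ipaddress` module (the addresses below are arbitrary examples):

```python
import ipaddress

# destination IP AND subnet mask -> subnet address
dst  = int(ipaddress.IPv4Address("192.168.1.104"))
mask = int(ipaddress.IPv4Address("255.255.255.0"))

subnet = ipaddress.IPv4Address(dst & mask)
print(subnet)   # -> 192.168.1.0
```

This is exactly what a router does per forwarding-table entry to decide whether a destination lies on one of its attached subnets.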

77. What is CIDR? What is route aggregation? What is longest-prefix matching?

Classless Inter-Domain Routing (CIDR): CIDR abandons the classful (A/B/C) address division. The network number and subnet number are merged into a single network prefix, which may have any length and is written together with the address, for example 192.168.0.104/24. Combined with the subnet mask, this makes subnet division and identification flexible.

Why CIDR?

CIDR solves the IP-address waste caused by traditional classful allocation, and it enables efficient routing (smaller routing tables) through route aggregation.

Route aggregation

Route aggregation combines several address blocks with longer prefixes into a single address block with a shorter prefix, so that one routing-table entry covers them all. Aggregation by class is also possible, but the idea became practical with CIDR.

Longest-prefix matching

Because of how Internet addresses were assigned historically, an aggregated short-prefix block may contain a longer-prefix sub-block that actually belongs to a different network. If a router matched only the short prefix, packets destined for that sub-block would be misdelivered, so routers need a mechanism to resolve such overlaps. This leads to the longest-prefix-matching rule: when a destination address matches more than one routing-table entry, the router forwards the packet according to the entry with the longest matching prefix.
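The rule can be sketched in a few lines using the standard `ipaddress` module; the prefixes and interface names below are hypothetical, with a more-specific /24 carved out of an aggregated /16:

```python
import ipaddress

def longest_prefix_match(dst, table):
    """Return the interface whose prefix is longest among all that match dst.

    `table` maps a CIDR prefix string to an outgoing interface name.
    """
    addr = ipaddress.IPv4Address(dst)
    best, best_len = None, -1
    for prefix, iface in table.items():
        net = ipaddress.IPv4Network(prefix)
        if addr in net and net.prefixlen > best_len:
            best, best_len = iface, net.prefixlen
    return best

# An aggregated /16 route plus a more-specific /24 that belongs elsewhere:
table = {"200.23.0.0/16": "if0", "200.23.18.0/24": "if1"}
print(longest_prefix_match("200.23.18.5", table))   # -> if1 (the /24 wins)
print(longest_prefix_match("200.23.19.5", table))   # -> if0
```

Real routers implement this with specialized data structures (tries, TCAMs) rather than a linear scan, but the matching rule is the same.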

78. Briefly describe NAT technology.

NAT: Network Address Translation. The basic idea is that the ISP assigns a single public IP address to each home or business for carrying Internet traffic. Inside the customer network, each host has a private IP address that is unique within that network, which suffices for routing internal traffic. When a packet leaves the customer network for the Internet, its private address must be translated into the public IP address; packets from different internal hosts are distinguished by port numbers.
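A minimal sketch of the NAT translation table makes the port-number trick concrete. The addresses and port values are illustrative textbook-style examples, and the helper names are invented:

```python
PUBLIC_IP = "138.76.29.7"        # the one public address for the whole site
nat_table = {}                   # public port -> (private ip, private port)
next_port = 5001                 # next fresh public port to hand out

def translate_out(src_ip, src_port):
    """Outbound: rewrite (private IP, port) to (public IP, fresh public port)."""
    global next_port
    for pub, priv in nat_table.items():   # reuse an existing mapping
        if priv == (src_ip, src_port):
            return PUBLIC_IP, pub
    pub = next_port
    next_port += 1
    nat_table[pub] = (src_ip, src_port)
    return PUBLIC_IP, pub

def translate_in(dst_port):
    """Inbound: the public port identifies the internal host again."""
    return nat_table.get(dst_port)

pub_ip, pub_port = translate_out("10.0.0.1", 3345)
print(pub_ip, pub_port)          # -> 138.76.29.7 5001
print(translate_in(pub_port))    # -> ('10.0.0.1', 3345)
```

The same idea explains why unsolicited inbound connections fail under NAT: with no table entry for the destination port, `translate_in` has nowhere to send the packet.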

79. Describe the ICMP function, packet type, packet format, and how to encapsulate ICMP packets into IP packets.

The ICMP function

Carries error-report and network-query messages between hosts and routers.

Two types of ICMP Packets (5+2)

Error-report messages (5)

Destination unreachable: sent when a packet cannot be delivered (unreachable network, host, port, and so on). Source quench: sent when a router's buffer is full, telling the sending host to reduce its rate (congestion control). Time exceeded: most commonly the TTL has expired, i.e. the packet reached the maximum hop count and a router discarded it. Parameter problem: sent when a router finds an error in a packet's header fields. Redirect: sent when a router believes a packet has been routed along the wrong path.

Network-query messages (2)

Echo request and reply: checks whether a destination is reachable; this is what ping uses. Timestamp request and reply: similar to echo, but the messages also carry the arrival time of the request and the sending time of the reply.

Message format

Encapsulation example
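As a sketch of the format and the encapsulation (a minimal illustration, not a full implementation): an ICMP message starts with an 8-bit Type, an 8-bit Code, and a 16-bit checksum computed over the entire message; an echo request is Type 8, Code 0 with identifier and sequence fields, and it travels in the data part of an IP packet whose Protocol field is 1.

```python
import struct

def checksum16(data: bytes) -> int:
    # RFC 1071 Internet checksum, the same algorithm IP/TCP/UDP use.
    if len(data) % 2:
        data += b"\x00"
    s = 0
    for (w,) in struct.iter_unpack("!H", data):
        s += w
        s = (s & 0xFFFF) + (s >> 16)
    return ~s & 0xFFFF

def icmp_echo_request(ident: int, seq: int, payload: bytes) -> bytes:
    # Type 8 (echo request), Code 0; checksum over the whole ICMP message,
    # computed with the checksum field set to zero first.
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
    csum = checksum16(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

msg = icmp_echo_request(ident=1, seq=1, payload=b"ping")
# To encapsulate: set the IP header's Protocol field to 1 (ICMP) and put
# `msg` in the IP data section.  Re-summing a correct message yields 0:
print(checksum16(msg))   # -> 0
```

This is exactly the packet a `ping` implementation emits before handing it to a raw IP socket.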

Data link layer

Quiet and Quick! Salute!