preface

The "golden September, silver October" hiring season is coming, so I have put together 50 classic computer network interview questions that the big tech companies love to ask. I hope you find your ideal offer after reading them.

  • WeChat official account: The Little Boy Who Picks Up Snails

1. What are the common HTTP request methods, their differences and uses?

  • GET: sends a request to retrieve data from the server
  • POST: submits data to the resource specified by the URL
  • PUT: submits data to the server to replace or modify a resource
  • HEAD: identical to GET but returns only the response headers, used to obtain metadata about a resource
  • DELETE: deletes the specified resource on the server
  • CONNECT: establishes a tunnel to the server, typically through a proxy
  • OPTIONS: lists the request methods supported by a resource; often used in cross-origin (CORS) preflight requests
  • TRACE: traces the transmission path of the request-response, used for diagnostics

2. What are the common HTTP status codes?

  • 1xx (informational): the request has been received and is being processed
  • 2xx (success): the request was successfully received, understood, and processed
  • 3xx (redirection): further action, usually a new request, is needed to complete the request
  • 4xx (client error): the request contains an error the server cannot process
  • 5xx (server error): the server failed while processing the request

Common status codes are as follows:

  • 101 Switching Protocols: e.g. upgrading the connection from HTTP to WebSocket
  • 200 OK: the request succeeded and the response is returned
  • 301 Moved Permanently: permanent redirect; the result is cached
  • 302 Found: temporary redirect; not cached by default
  • 400 Bad Request: the request is malformed
  • 403 Forbidden: the server refuses to authorize access to the resource
  • 404 Not Found: no resource matching the URI was found
  • 500 Internal Server Error: a generic server-side error
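Python's standard library encodes these codes, reason phrases, and classes in http.HTTPStatus, which makes a quick sanity check possible (a small illustrative sketch, not part of any server):

```python
from http import HTTPStatus

# Look up the standard reason phrase for a numeric status code.
def reason(code: int) -> str:
    return HTTPStatus(code).phrase

# Classify a status code into its 1xx-5xx family.
def family(code: int) -> str:
    return {1: "informational", 2: "success", 3: "redirection",
            4: "client error", 5: "server error"}[code // 100]

print(reason(301))  # Moved Permanently
print(family(404))  # client error
```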

3. What happens from entering a URL in the browser address bar to the page being displayed?

  1. DNS resolution: look up the real IP address for the domain name
  2. Establish a TCP connection with the server (three-way handshake)
  3. Send the HTTP request
  4. The server processes the request and returns an HTTP response
  5. The browser parses the response and renders the page
  6. The connection is closed
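Before step 1 can even start, the browser parses the URL into its components; urllib.parse in Python's standard library mirrors that first step (illustrative only, the URL is a made-up example):

```python
from urllib.parse import urlsplit

# Split a URL into the pieces the browser needs: the scheme decides the
# default port, the hostname feeds the DNS lookup, the path goes into
# the request line.
parts = urlsplit("https://example.com/index.html?q=hello")
print(parts.scheme)    # https -> port 443 by default
print(parts.hostname)  # example.com -> resolved via DNS
print(parts.path)      # /index.html -> sent in the GET request line
```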

4. How to understand that HTTP is stateless?

Each HTTP request is independent and unrelated to the others; by default, the server does not need to keep any context between requests. Let's look at an easy-to-understand example:

Stateful:

  • A: What do we eat today?
  • B: Tilapia!
  • A: How does it taste?
  • B: Not bad. It smells good.

Stateless:

  • A: What do we eat today?
  • B: Tilapia!
  • A: How does it taste?
  • B: Huh? What? What are you talking about? How does what taste?

Add a cookie:

  • A: What do we eat today?
  • B: Tilapia
  • A: How was your tilapia today?
  • B: Not bad. It smells good.
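The dialogue above can be sketched as a toy "server" that keeps no state unless the request carries a cookie. This is a contrived illustration, not real HTTP; the cookie value and the in-memory session store are assumptions:

```python
# Server-side store: cookie value -> remembered context.
sessions = {}

def handle(request: dict) -> str:
    sid = request.get("cookie")  # stateless: no memory without this
    if request["msg"] == "What do we eat today?":
        if sid is not None:
            sessions[sid] = "tilapia"  # remember the topic for this session
        return "Tilapia!"
    if request["msg"] == "How does it taste?":
        topic = sessions.get(sid)
        if topic is None:
            return "Huh? How does WHAT taste?"  # no context: the stateless reply
        return f"The {topic} is not bad!"

# Without a cookie, the second question makes no sense to the server.
print(handle({"msg": "What do we eat today?"}))
print(handle({"msg": "How does it taste?"}))
# With a cookie, context survives across otherwise independent requests.
print(handle({"msg": "What do we eat today?", "cookie": "abc123"}))
print(handle({"msg": "How does it taste?", "cookie": "abc123"}))
```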

5. Differences between HTTP 1.0, 1.1, and 2.0

The HTTP 1.0

  • HTTP 1.0 uses only short-lived connections by default: the browser must establish a new TCP connection with the server for every request, and the server closes the TCP connection as soon as the response is sent. Persistent connections can be forced on by setting the Connection: keep-alive header field.

The HTTP 1.1

  • Persistent connections were introduced, meaning that TCP connections are not closed by default and can be reused by multiple requests.
  • Pipelining was introduced, allowing the client to send multiple requests over the same TCP connection without waiting for each response.
  • Cache handling was improved, introducing more cache-control policies such as Cache-Control and ETag/If-None-Match.
  • Error status management: 24 new error status codes were added. For example, 409 (Conflict) indicates that the request conflicts with the current state of the resource.

HTTP 2

  • Multiplexing is used: within a single connection, both the client and the server can send multiple requests or responses concurrently, without having to match them one-to-one in order.
  • Server push: HTTP 2 allows the server to proactively push resources to the client before they are requested.

6. Talk about computer network architecture

The computer network architecture mainly includes the OSI seven-layer model, the TCP/IP four-layer model, and the five-layer architecture.

OSI seven-layer model

The OSI seven-layer model is a standard framework developed by the International Organization for Standardization (ISO) for interconnecting computer and communication systems.

  • Application layer: the interface between network services and end users. Protocols include HTTP, FTP, TFTP, SMTP, SNMP, DNS, TELNET, HTTPS, POP3, and DHCP.
  • Presentation layer: data representation, security, and compression.
  • Session layer: establishes, manages, and terminates sessions between a local host and a remote host.
  • Transport layer: defines the port numbers for transferring data, as well as flow control and error checking. Protocols include TCP and UDP.
  • Network layer: performs logical addressing and path selection between different networks. Protocols include ICMP, IGMP, and IP (IPv4, IPv6).
  • Data link layer: establishes logical links and performs hardware (MAC) addressing, error checking, and similar functions.
  • Physical layer: establishes, maintains, and tears down physical connections.

TCP/IP four-layer model

  • Application layer: corresponds to the OSI application, presentation, and session layers. It provides users with the services they need, such as FTP, Telnet, DNS, and SMTP.
  • Transport layer: corresponds to the OSI transport layer. It provides end-to-end communication for application-layer entities and ensures in-order delivery of packets and data integrity. TCP and UDP are defined at this layer.
  • Network layer: corresponds to the network layer of the OSI reference model. It mainly solves host-to-host communication. Its three main protocols are the Internet Protocol (IP), the Internet Group Management Protocol (IGMP), and the Internet Control Message Protocol (ICMP).
  • Network interface layer: corresponds to the data link and physical layers of the OSI reference model. It is responsible for the data exchange between the host and the network.

Five-tier architecture

  • Application layer: completes specific network applications through interaction between application processes. It corresponds to the OSI application, presentation, and session layers. There are many application-layer protocols, such as the Domain Name System (DNS), the HTTP protocol, and SMTP for email. The unit of data exchanged at the application layer is called a message.
  • Transport layer: responsible for providing general data-transfer services for communication between two host processes. This layer includes the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP).
  • Network layer: corresponds to the network layer of the OSI reference model.
  • Data link layer: corresponds to the data link layer of the OSI reference model.
  • Physical layer: corresponds to the physical layer of the OSI reference model. The unit of data transmitted at the physical layer is the bit. Its role is to transmit bit streams transparently between adjacent nodes, shielding the differences between specific transmission media and physical devices as far as possible.

7. What are the differences between POST and GET?

  • Request parameters: GET puts parameters in the URL, joined with &; POST passes parameters in the request body.
  • Caching: GET requests are actively cached, while POST requests are not unless caching is set up manually.
  • Bookmarks: the URL of a GET request can be bookmarked; a POST request cannot.
  • Safety: POST is somewhat safer than GET for sensitive data. Also, a GET request is harmless when the browser goes back, while going back to a POST resubmits the request.
  • History: GET request parameters are fully preserved in the browser history, while POST parameters are not.
  • Encoding: GET requests only support URL encoding, while POST supports multiple encoding types.
  • Parameter data types: GET accepts only ASCII characters, while POST places no restrictions on the data type.
  • Packets: a GET request generates one TCP packet, while a POST may generate two (headers first, then body).
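The first difference can be seen with urllib.parse: the same parameters go onto the URL for GET but into the body for POST. A sketch, with a made-up host; no request is actually sent:

```python
from urllib.parse import urlencode

params = {"name": "snail", "page": "1"}
query = urlencode(params)  # name=snail&page=1

# GET: parameters are concatenated onto the URL with ? and &.
get_url = "http://example.com/search?" + query

# POST: the same bytes travel in the request body instead,
# with a Content-Type header describing the encoding.
post_body = query.encode("ascii")
post_headers = {"Content-Type": "application/x-www-form-urlencoded"}

print(get_url)    # http://example.com/search?name=snail&page=1
print(post_body)  # b'name=snail&page=1'
```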

8. During an interaction, if the data transmission has finished but you do not want to disconnect, how is the connection maintained?

In HTTP, set the Connection field of the response header to keep-alive.

9. How does HTTP implement persistent connections? At what point does the time run out?

How does HTTP implement persistent connections?

  • HTTP connections are divided into long (persistent) and short connections, which are essentially TCP long and short connections. A TCP connection is a two-way channel that can be kept open for a period of time, hence the terms "long connection" and "short connection".
  • Persistent connections refer to the TCP connection, not the HTTP request itself.
  • A TCP long connection can be reused for multiple HTTP requests, which reduces resource consumption: for example, one request fetches the HTML, and the same connection can then fetch the subsequent JS/CSS/images; with short connections, each of those would need its own TCP connection.

To implement a persistent HTTP connection, set Connection to keep-alive in the response header. HTTP 1.1 uses persistent connections by default; HTTP 1.0 also supports them, but they are disabled by default.

When does it time out?

  • An HTTP server usually runs as a daemon (e.g. httpd) that can set a keep-alive timeout: the TCP connection is closed if it stays idle longer than this time. A timeout can also be set in the HTTP Keep-Alive header.
  • TCP keepalive itself has three parameters that can be set under net.ipv4 in the system kernel: after a TCP connection has been idle for tcp_keepalive_time seconds, a probe packet is sent; if no ACK is received, another probe is sent every tcp_keepalive_intvl seconds, up to tcp_keepalive_probes times; if there is still no answer, the connection is dropped.
  1. tcp_keepalive_intvl = 15
  2. tcp_keepalive_probes = 5
  3. tcp_keepalive_time = 1800
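The same knobs can also be set per socket from code. A sketch in Python; TCP_KEEPIDLE, TCP_KEEPINTVL, and TCP_KEEPCNT are Linux-specific, hence the hasattr guards:

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Turn TCP keepalive on for this socket.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

# Linux-only fine tuning, mirroring the sysctl values above.
if hasattr(socket, "TCP_KEEPIDLE"):
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 1800)  # idle time before probing
if hasattr(socket, "TCP_KEEPINTVL"):
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 15)   # interval between probes
if hasattr(socket, "TCP_KEEPCNT"):
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)      # probes before giving up

keepalive_on = sock.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE)
print(keepalive_on)  # non-zero once enabled
sock.close()
```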

10. What are the differences between HTTP and HTTPS?

HTTP is a protocol for transferring data on top of the TCP/IP communication protocol. HTTP has the following problems:

  • Request information is transmitted in plaintext, so it is easy to eavesdrop on and intercept.
  • Data integrity is not verified, so messages are easy to tamper with.
  • The identity of the other party is not verified, so there is a danger of impersonation.

HTTPS was designed to solve these problems of HTTP. HTTPS (HyperText Transfer Protocol over Secure Socket Layer) is HTTP running over SSL/TLS: HTTPS = HTTP + SSL/TLS. An SSL certificate is used to authenticate the server's identity and to encrypt the data transmitted between the browser and the server.

Their main differences:

  • Data encryption: HTTP transmits in plaintext, HTTPS in ciphertext.
  • Default port: the default HTTP port is 80; the default HTTPS port is 443.
  • Resource consumption: compared with HTTP, HTTPS consumes more CPU and memory because of the encryption and decryption processing.
  • Security: HTTP is not secure, while HTTPS is more secure.

11. What is the Https process?

  • HTTPS = HTTP + SSL/TLS, that is, SSL/TLS for encryption and decryption of data, HTTP for transmission.
  • SSL, or Secure Sockets Layer protocol, is a security protocol that provides security and data integrity for network communication.
  • TLS, or Transport Layer Security, is the successor of SSL 3.0.

  1. The user enters an HTTPS url in the browser and connects to port 443 of the server.
  2. The server must have a digital certificate, either self-made or applied for from a certificate authority. The difference is that a self-made certificate must be explicitly trusted by the client before it passes verification. The certificate is essentially a public/private key pair.
  3. The server sends its own digital certificate (including the public key) to the client.
  4. After receiving the server's digital certificate, the client checks it. If the check fails, a warning dialog box is shown. If the certificate is fine, the client generates a random key (for symmetric encryption) and encrypts it with the certificate's public key.
  5. The client initiates a second HTTP request in HTTPS, sending the encrypted client key to the server.
  6. After receiving the ciphertext sent by the client, the server decrypts it asymmetrically with its own private key to obtain the client key. It then uses the client key to symmetrically encrypt the data to be returned, so the data becomes ciphertext.
  7. The server returns the encrypted ciphertext to the client.
  8. After receiving the ciphertext sent by the server, the client uses its own key (client key) to decrypt the ciphertext symmetrically to obtain the data returned by the server.
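The client side of steps 3 and 4 (obtain and verify the server certificate) is exactly what ssl.create_default_context configures in Python. A sketch of the settings only, without actually connecting anywhere:

```python
import ssl

# A default client context: verifies the server certificate against the
# system CA store and checks that the hostname matches the certificate.
ctx = ssl.create_default_context()

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # certificate must validate (step 4)
print(ctx.check_hostname)                    # hostname must match the certificate

# Calling ctx.wrap_socket(sock, server_hostname=...) on a connected TCP
# socket would then perform the TLS handshake (steps 3-6) automatically.
```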

12. What’s the difference between symmetric encryption and asymmetric encryption

Symmetric encryption: Encryption algorithm that uses the same key for encryption and decryption.

Asymmetric encryption: An asymmetric encryption algorithm requires two keys (public key and private key). The public key and private key exist in pairs. If the public key is used to encrypt data, only the corresponding private key can be decrypted.
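The contrast can be shown with two deliberately toy ciphers (do not use either for real security): a XOR stream for the symmetric case, and textbook RSA with tiny primes (p=61, q=53, e=17, d=2753) for the asymmetric case:

```python
# Symmetric: the SAME key encrypts and decrypts (toy XOR cipher).
def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

secret = b"shared-key"
ciphertext = xor_cipher(b"hello", secret)
plaintext = xor_cipher(ciphertext, secret)  # applying the same key reverses it

# Asymmetric: textbook RSA with tiny primes p=61, q=53.
# The public key (e, n) encrypts; only the private key (d, n) decrypts.
n, e, d = 61 * 53, 17, 2753  # d is the modular inverse of e mod (60*52)
message = 65
encrypted = pow(message, e, n)  # anyone holding the public key can do this
decrypted = pow(encrypted, d, n)  # only the private-key holder can undo it

print(plaintext)  # b'hello'
print(decrypted)  # 65
```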

13. What are XSS attacks and how to avoid them?

An XSS attack (Cross-Site Scripting) is abbreviated XSS rather than CSS to avoid confusion with Cascading Style Sheets. It refers to a malicious attacker injecting malicious HTML/JavaScript code into a web page; when users browse the page, the embedded code is executed, achieving the attacker's malicious purpose. XSS attacks generally fall into three types: stored XSS, reflected XSS, and DOM-based XSS.

How does XSS attack?

Take reflected XSS as an example: the attacker crafts a URL containing a malicious script and lures the user into clicking it; the server reflects the script back in its response, and the user's browser executes it.

How to solve the XSS attack problem

  • Do not trust user input: filter the input, filter tags, and so on; only allow whitelisted values.
  • HTML-escape output.
  • For link jumps, such as <a href="xxx">, validate the content and forbid illegal links, e.g. those beginning with the javascript: scheme.
  • Limit the input length, and so on.
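HTML escaping is one line with Python's standard library html.escape, which neutralizes the characters a script injection needs:

```python
import html

# User-controlled input that tries to inject a script.
user_input = '<script>alert("xss")</script>'

# Escaping turns markup characters into harmless entities,
# so the browser renders them as text instead of executing them.
safe = html.escape(user_input)
print(safe)  # &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;
```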

14. Please introduce the TCP three-way handshake mechanism in detail

At the start, both the client and the server are in the CLOSED state. The server then starts listening on a port and enters the LISTEN state.

  • After the first handshake (SYN=1, seq=x) is sent, the client enters the SYN_SENT state.
  • After the second handshake (SYN=1, ACK=1, seq=y, acknum=x+1) is sent, the server enters the SYN_RCVD state.
  • After the third handshake (ACK=1, acknum=y+1) is sent, the client enters the ESTABLISHED state; when the server receives this packet, it also enters ESTABLISHED.
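The handshake itself happens inside the kernel when connect() meets accept(). A loopback sketch in Python; everything stays on 127.0.0.1 and error handling is omitted:

```python
import socket

# Server side: CLOSED -> LISTEN.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

# Client side: connect() drives the three-way handshake
# (SYN -> SYN+ACK -> ACK) before it returns.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))

# accept() hands the server the ESTABLISHED connection.
conn, addr = server.accept()
print(addr[0])  # 127.0.0.1 - both ends are now ESTABLISHED

conn.close(); client.close(); server.close()
```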

15. Why is the TCP handshake three times, not two times? Can’t be four times?

Why is the TCP handshake three times? For ease of understanding, let's take a romantic relationship as an example. The most important thing for two people to get together is mutual love: I love you, and I know that you love me too.

Why can’t there be two handshakes?

If there are only two handshakes, a girl may not know if her “I love you too” message is received or not, and the relationship will not be happy.

Why can’t there be four handshakes?

Because you can’t shake hands four times? Because three times is enough. Three times is enough to let both sides know: you love me and I love you. Four times is too much.

16. What is the process of the TCP four-way wave?

  1. After the first wave (FIN=1, seq=u) is sent, the client enters the FIN_WAIT_1 state.
  2. After the second wave (ACK=1, acknum=u+1, seq=v) is sent, the server enters the CLOSE_WAIT state; on receiving this acknowledgment, the client enters the FIN_WAIT_2 state.
  3. After the third wave (FIN=1, ACK=1, seq=w, acknum=u+1) is sent, the server enters the LAST_ACK state, waiting for the final ACK from the client.
  4. On the fourth wave (ACK=1, seq=u+1, acknum=w+1), the client sends the acknowledgment and enters the TIME_WAIT state, where it waits for a fixed period of 2MSL (twice the Maximum Segment Lifetime). If the server does not receive the ACK, it retransmits the FIN; after receiving the ACK, the server closes the connection and enters the CLOSED state. Once the 2MSL wait expires without a retransmitted FIN, the client also enters the CLOSED state.
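The FIN semantics can be observed on a loopback connection: shutdown(SHUT_WR) sends a FIN, and the peer's recv() returning b'' is how it learns that direction is closed, while the other direction stays usable (a sketch; error handling omitted):

```python
import socket

# Build a connected loopback pair.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())
conn, _ = server.accept()

# First "wave": the client sends FIN by closing only its write side.
client.shutdown(socket.SHUT_WR)

# The server sees end-of-stream: recv() returns b'' (the FIN arrived),
# but the server -> client direction is still open until it closes too.
eof = conn.recv(1024)
print(eof)  # b''
conn.send(b"last words")  # server can still talk (it is in CLOSE_WAIT)
reply = client.recv(1024)
print(reply)  # b'last words'

conn.close(); client.close(); server.close()
```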

17. Why does the client need to wait 2MSL before entering the CLOSED state during the four TCP waves

2MSL = 2 × MSL (Maximum Segment Lifetime)

  • One MSL ensures that the last ACK sent by the actively closing side in the four waves can reach the peer.
  • One more MSL ensures that if the peer does not receive that ACK, its retransmitted FIN can still be received.

18. Why do you need four waves?

Let me give you an example

Xiao Ming and Xiao Hong are on the phone. As the call is about to end, Xiao Hong says, "I have nothing more to say," and Xiao Ming replies, "I know." But Xiao Ming may still have something to say, and Xiao Hong cannot force him to end the call on her schedule, so Xiao Ming may chatter on for a while longer. Finally Xiao Ming says, "I'm done," and Xiao Hong replies, "I know." Only then is the call over.

19. The difference between sessions and cookies.

Let’s first look at the definitions of Session and Cookie:

  • A Cookie is a small piece of text string data sent by the server to the user’s browser and stored locally in the browser. It will be carried to the server the next time the browser makes a request to the same server. Typically, it is used to tell the server whether two requests are coming from the same browser, to keep the user logged in, and so on. Cookies make it possible to record stable state information over the stateless HTTP protocol.
  • A Session refers to the process of a conversation between the server and a client. Sessions typically use cookies to carry their identifier: when the user makes the first request, the server creates a Session and places its ID in a cookie on the user's browser. When the Session ends, the cookie expires. The Session object stores the properties and configuration information required for a specific user session.

What’s the difference between a Session and a Cookie?

  • The Cookie is stored on the client and the Session is stored on the server.
  • Cookie can only store ASCII data, while Session can store any data type. Generally, we can keep some common variable information in Session, such as UserId, etc.
  • The validity periods differ. A cookie can be set to last a long time, as with the "remember me" login feature we often use; a Session is usually valid for a short time and becomes invalid when the client closes or the Session times out.
  • The privacy policy is different. Cookies are stored on the client side and are easy to be obtained illegally. In the early days, some people stored the user’s login name and password in cookies, leading to information theft. Sessions are stored on the server and have better security than cookies.
  • The storage size varies. The data stored by a single Cookie cannot exceed 4K, while the data stored by a Session is much higher than that of a Cookie.

The overall flow is as follows:

  • When a user requests the server for the first time, the server creates a Session based on the information submitted by the user. When the request is returned, the server returns the unique SessionID of the Session to the browser. After receiving the SessionID returned by the server, the browser stores the SessionID in a Cookie. The Cookie records the domain name to which the SessionID belongs.
  • When the user accesses the server a second time, the request automatically checks whether cookie information exists for the domain. If it does, the cookie is sent to the server along with the request. The server obtains the SessionID from the cookie and looks up the corresponding Session. If no Session is found, the user has not logged in or the login has expired; if it is found, the user is logged in and can proceed.
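The cookie half of this exchange can be sketched with Python's http.cookies; the cookie name sessionid, its value, and the domain are made-up examples:

```python
from http.cookies import SimpleCookie

# Server side: attach the new SessionID to the response.
response_cookie = SimpleCookie()
response_cookie["sessionid"] = "abc123"
response_cookie["sessionid"]["domain"] = "example.com"  # domain the cookie belongs to
header = response_cookie.output(header="Set-Cookie:")
print(header)

# Browser side, on the next request: the Cookie header comes back,
# and the server extracts the SessionID to look up the Session.
request_cookie = SimpleCookie()
request_cookie.load("sessionid=abc123")
session_id = request_cookie["sessionid"].value
print(session_id)  # abc123
```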

20. How does TCP guarantee reliability

  • First, TCP connections are established with a three-way handshake and closed with a four-way wave, ensuring reliable setup and teardown.
  • Second, TCP's reliability is reflected in its state tracking: TCP records which data has been sent, which has been received and acknowledged, and which has not, and it ensures that segments arrive in order, so the data stream is transmitted without error.
  • Third, TCP's reliability is reflected in its control mechanisms: checksums, ACK responses, timeout retransmission (sender side), retransmission of out-of-order or lost data (receiver feedback), flow control (sliding window), congestion control, and so on.

21. The difference between TCP and UDP

  1. TCP is connection-oriented (like making a phone call: you dial first to establish the connection); UDP is connectionless, meaning no connection needs to be established before sending data.
  2. TCP provides a reliable service: data transmitted over a TCP connection arrives without loss or duplication. UDP is best-effort delivery and does not guarantee reliable delivery.
  3. TCP connections are point-to-point; UDP supports one-to-one, one-to-many, and many-to-many communication.
  4. TCP's transmission efficiency is relatively low, while UDP's is high; UDP suits high-speed transmission and real-time or broadcast communication.
  5. TCP is suitable for web pages, email, etc.; UDP is suitable for video, voice broadcast, etc.
  6. TCP is byte-stream oriented; UDP is message (datagram) oriented.
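The connectionless, message-oriented nature of UDP shows up directly in the socket API: no connect/accept handshake, just sendto/recvfrom. A loopback sketch (on the local machine delivery is effectively reliable, which would not hold over a real network):

```python
import socket

# Two UDP endpoints - no handshake, no connection state.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"datagram one", addr)  # fire-and-forget: no delivery guarantee
sender.sendto(b"datagram two", addr)

# Each recvfrom returns exactly one datagram - UDP preserves
# message boundaries, unlike TCP's byte stream.
msg1, _ = receiver.recvfrom(1024)
msg2, _ = receiver.recvfrom(1024)
print(msg1)  # b'datagram one'
print(msg2)  # b'datagram two'

sender.close(); receiver.close()
```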

22. What are the fields in the TCP packet header

  • 16-bit port numbers: the source port identifies the sending application on the source host; the destination port identifies the upper-layer protocol or application the segment should be passed to.
  • 32-bit sequence number: the number of a byte in the byte stream in one transmission direction during a TCP communication (from connection establishment to disconnection).
  • 32-bit acknowledgment number: the response to a TCP segment sent by the other side; its value is the sequence number of the received TCP segment plus 1.
  • 4-bit header length: the number of 32-bit words (4 bytes) in the TCP header. Since 4 bits can represent at most 15, the TCP header is at most 60 bytes.
  • 6 flag bits: URG (urgent pointer valid), ACK (acknowledgment number valid), PSH (push the data to the application immediately), RST (reset the connection), SYN (connection-establishment segment), FIN (the sender is closing the connection).
  • 16-bit window size: a means of TCP flow control. The window here is the receive window: it tells the peer how many bytes of data the local TCP receive buffer can still hold, so the peer can control its sending rate.
  • 16-bit checksum: filled in by the sender. The receiver recomputes the checksum over the TCP segment to check whether it was damaged in transit. Note that this checksum covers not only the TCP header but also the data, which is an important guarantee of reliable TCP transmission.
  • 16-bit urgent pointer: a positive offset. Added to the value of the sequence-number field, it gives the sequence number of the byte following the last byte of urgent data; strictly speaking, this field is the offset of the urgent data relative to the current sequence number. The urgent pointer is the mechanism by which the sender delivers urgent data to the receiver.
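The fixed 20-byte header can be packed and unpacked with Python's struct module; the field values below are arbitrary examples chosen for illustration:

```python
import struct

# Fixed TCP header: src port, dst port, seq, ack, offset/flags, window,
# checksum, urgent pointer  ->  format "!HHIIHHHH" = 20 bytes, network order.
src, dst = 12345, 80
seq, ack = 1000, 2000
offset_flags = (5 << 12) | 0x18  # header length 5 words (20 bytes), PSH|ACK flags
window, checksum, urgent = 65535, 0, 0

header = struct.pack("!HHIIHHHH", src, dst, seq, ack,
                     offset_flags, window, checksum, urgent)
print(len(header))  # 20 - the minimum TCP header size

# Unpack and recover the 4-bit header length in bytes.
fields = struct.unpack("!HHIIHHHH", header)
header_len = (fields[4] >> 12) * 4
print(header_len)   # 20 (the maximum would be 15 * 4 = 60)
```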

23. What is the difference between HTTP status code 301 and 302?

  • 301 (Moved Permanently): the requested page has been permanently moved to a new location. When the server returns this response (to a GET or HEAD request), the requester is automatically redirected to the new location.
  • 302 (Found, temporarily moved): the server is currently responding with a page from a different location, but the requester should continue to use the original URL for future requests. Like 301, this code automatically redirects the requester to a different location for GET and HEAD requests.

The difference between HTTP status code 301 and 302 is as follows:

  1. The key difference between them is whether the original resource location is still valid;
  2. With 301 the resource still exists but has permanently changed location, and the content at the new location is returned;
  3. With 302 the resource is temporarily unavailable at the original location, and a temporary replacement page is returned.

24. The retransmission mechanism of TCP

Timeout retransmission

TCP implements a retransmission mechanism to achieve reliable transmission. The most basic one is timeout retransmission: when a segment is sent, a timer is started; if no ACK is received within a certain period, the segment is resent.

And what is a reasonable interval? Let's first look at RTT (Round-Trip Time).

RTT is the time from the moment a packet is sent to the moment its acknowledgment comes back, i.e. the packet's round-trip time. The retransmission interval is called the Retransmission Timeout, or RTO for short.

How long does the RTO take?

  • If the RTO is too small, data that was not actually lost may be resent, which increases network congestion and leads to more timeouts.
  • If the RTO is too large, lost packets are not resent until "the flowers have long faded", which is inefficient.

In general, RTO should be slightly larger than RTT for the best effect. Is there a formula for the timeout? There is: a standard formula for calculating RTO, known as the Jacobson/Karels algorithm. Let's look at it.

1. First calculate SRTT (the smoothed RTT):

SRTT = (1 - α) * SRTT + α * RTT    // weighted moving average of the RTT

2. Calculate RTTVAR (round-trip time variation)

RTTVAR = (1 - β) * RTTVAR + β * |RTT - SRTT|    // the deviation between SRTT and the measured value

3. Final RTO

RTO = μ * SRTT + ∂ * RTTVAR = SRTT + 4 * RTTVAR

where α = 0.125, β = 0.25, μ = 1, and ∂ = 4; these parameters are the optimal values obtained from a large number of measurements.
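The three update rules translate directly into code. A sketch using the constants from the text (α=0.125, β=0.25, μ=1, ∂=4); the starting estimates and samples are made-up millisecond values:

```python
ALPHA, BETA, MU, PHI = 0.125, 0.25, 1, 4

def update_rto(srtt, rttvar, rtt):
    """One Jacobson/Karels update step; all times in the same unit (e.g. ms)."""
    rttvar = (1 - BETA) * rttvar + BETA * abs(rtt - srtt)  # deviation estimate (uses old SRTT)
    srtt = (1 - ALPHA) * srtt + ALPHA * rtt                # smoothed RTT
    rto = MU * srtt + PHI * rttvar                         # RTO = SRTT + 4 * RTTVAR
    return srtt, rttvar, rto

# Start from initial estimates and feed in new RTT samples.
srtt, rttvar = 100.0, 50.0
for sample in (110.0, 90.0, 300.0):  # the sudden spike inflates RTTVAR, hence RTO
    srtt, rttvar, rto = update_rto(srtt, rttvar, sample)
print(round(srtt, 1), round(rto, 1))
```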

However, timeout retransmission has these disadvantages:

  • When a segment is lost, the sender waits for the full timeout period before retransmitting, increasing the end-to-end delay.
  • If a segment was actually received but its ACK is delayed for a long time, the sender may conclude the segment was lost and retransmit it unnecessarily, wasting resources and time.

Also, TCP doubles the timeout interval on each successive retransmission, so timeout retransmission can take a long time. That is why a fast retransmit mechanism can also be used.

The fast retransmission

The fast retransmit mechanism is not time-driven but data-driven: it triggers a retransmission based on feedback from the receiver.

Let’s take a look at the fast retransmission process:

Suppose the sender sends six segments, Seq=1 through Seq=6:

  • The first, Seq=1, is sent and arrives, so the receiver ACKs 2;
  • The second, Seq=2, also arrives normally, so the receiver ACKs 3;
  • The third, Seq=3, is lost in the network;
  • The fourth, Seq=4, arrives, but because Seq=3 is missing, the receiver still ACKs 3;
  • Seq=5 and Seq=6 also arrive, but the ACK is still 3, because Seq=3 has not been received;
  • When the sender receives three duplicate ACK=3s (four in total, but the first is a normal ACK and the last three are duplicates), it knows which segment was lost and retransmits it before the timer expires;
  • Now, since Seq=4, 5, and 6 have already been received, the receiver ACKs 7.

But fast retransmit also has a problem: the duplicate ACK only tells the sender the highest in-order byte received. Which later packets are missing is unknown. So how many packets should be retransmitted?

Retransmit only Seq=3? Or retransmit Seq=3, 4, 5, and 6? The sender cannot tell which segments triggered the three duplicate ACK=3s.

Retransmission with selection confirmation (SACK)

To solve fast retransmit's problem of how many packets to retransmit, TCP provides the SACK method (retransmission with Selective Acknowledgment).

The SACK mechanism: on top of fast retransmit, the receiver returns the sequence-number ranges of the most recently received packets. This lets the sender know which packets the receiver did and did not receive, and therefore exactly which to retransmit. The SACK information is carried in the options field of the TCP header.

For example, suppose the sender receives the same ACK=30 three times, triggering fast retransmit; from the SACK information it finds that only bytes 30 to 39 are missing, so it retransmits only the TCP segment covering 30 to 39.

D-SACK

D-SACK, or Duplicate SACK, is an extension of SACK. It is mainly used to tell the sender which packets were received more than once. D-SACK helps the sender determine whether packet reordering, ACK loss, packet duplication, or spurious retransmission has occurred, so TCP can manage network flow better.

25. What are the categories of IP addresses?

In short, IP address = network number + host number.

  1. Network number: identifies the network to which the host (or router) is connected; the network address indicates which network on the Internet it belongs to.
  2. Host number: identifies the host (or router) itself; the host address indicates which host within that network it is.

IP addresses are classified into five categories: A, B, C, D, and E:

  • Class A addresses (1 to 126): start with 0; the first 8 bits are the network number and the last 24 bits the host number.
  • Class B addresses (128 to 191): start with 10; the first 16 bits are the network number and the last 16 bits the host number.
  • Class C addresses (192 to 223): start with 110; the first 24 bits are the network number and the last 8 bits the host number.
  • Class D addresses (224 to 239): multicast addresses; they start with 1110.
  • Class E addresses (240 to 255): start with 1111 and are reserved for future use.
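The class can be read straight off the first octet; a small sketch of classful addressing:

```python
def ip_class(address: str) -> str:
    """Classify an IPv4 address by its first octet (classful addressing)."""
    first = int(address.split(".")[0])
    if 1 <= first <= 126:
        return "A"  # leading bit 0,     8-bit network number
    if 128 <= first <= 191:
        return "B"  # leading bits 10,  16-bit network number
    if 192 <= first <= 223:
        return "C"  # leading bits 110, 24-bit network number
    if 224 <= first <= 239:
        return "D"  # leading bits 1110, multicast
    if 240 <= first <= 255:
        return "E"  # leading bits 1111, reserved
    return "special"  # 0 and 127 (loopback) fall outside the A range here

print(ip_class("10.0.0.1"))    # A
print(ip_class("172.16.0.1"))  # B
print(ip_class("224.0.0.5"))   # D
```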

26. Talk about the TCP sliding window

Without a window, TCP would send one piece of data and send the next only after its acknowledgment was received. The disadvantage is that efficiency is relatively low.

It is like a face-to-face conversation: you say something, I reply, then you continue. If I am busy with something else and cannot reply in time, you would have to wait until I am free before saying your next sentence, which is obviously impractical.

To solve this problem, TCP introduces the window, a buffer space managed by the operating system. The window size is the maximum amount of data that can remain unacknowledged while transmission continues.

The TCP header has a field called WIN, which is the 16-bit window size. It tells the other party how many bytes of data can be contained in the local TCP receive buffer, so that the other party can control the speed of sending data, thus accomplishing the purpose of flow control.

So essentially, when the receiver acknowledges a packet, it also tells the sender how much free space remains in its receive buffer; that free space is called the size of the receive window, which is the WIN field.

There are two types of TCP sliding windows: the send window and the receive window. The sender's sliding window consists of four parts, as follows:

  • Sent and already ACKed
  • Sent but not yet ACKed
  • Not yet sent, but within the window (may be sent)
  • Not yet sent and outside the window (may not be sent yet)

  • The dashed rectangle is the send window.
  • SND.WND: the size of the send window (14 squares in the original figure).
  • SND.UNA: an absolute pointer to the sequence number of the first byte that has been sent but not acknowledged.
  • SND.NXT: the next send position, pointing to the sequence number of the first byte that has not been sent but may be sent.

The receiver’s sliding window consists of three parts, as follows:

  • Received and acknowledged
  • Not yet received, but ready to accept
  • Not yet received and not acceptable (outside the window)

  • The dashed rectangle is the receive window.
  • RCV.WND: the size of the receive window (9 squares in the original figure).
  • RCV.NXT: the next receive position, pointing to the sequence number of the first byte not yet received but acceptable.

27. What are the network protocols corresponding to each layer in the five-tier computer network architecture?

  • Application layer: HTTP, FTP, SMTP, DNS, TELNET, SSH
  • Transport layer: TCP, UDP
  • Network layer: IP, ICMP, ARP, RARP
  • Data link layer: PPP, Ethernet
  • Physical layer: transmits raw bit streams over the physical medium

28. Talk about TCP traffic control

With the TCP three-way handshake, the sender and receiver enter the ESTABLISHED state and are now happily transmitting data.

But the sender cannot blindly flood the receiver with data: if the receiver cannot keep up, it can only hold the data in its buffer. Once the buffer is full, any further packets have to be dropped, wasting network resources.

TCP provides a mechanism for the sender to control the amount of data sent based on the actual receiving capacity of the receiver. This is called flow control.

TCP controls traffic through sliding windows. Let's look at the brief flow of flow control:

First, the two sides shake hands three times and initialize their respective window sizes, which are both 400 bytes.

  1. Suppose the sender sends 200 bytes to the receiver. The sender's SND.NXT moves 200 bytes to the right, so its currently usable window shrinks by 200 bytes.
  2. The receiver buffers the 200 bytes, so RCV.WND = 400 - 200 = 200 bytes, and it returns win = 200 to the sender in the ACK header.
  3. The sender sends another 200 bytes, which arrive and join the buffer queue. Under heavy load the receiver can only process 100 bytes; the remaining 100 stay queued. Now RCV.WND = 400 - 200 - 100 = 100 bytes, so win = 100 is returned to the sender.
  4. The sender sends 100 more bytes, at which point the receive window win drops to 0.
  5. The sender stops sending and starts a timer, probing the receiver periodically until win is greater than 0 again.
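
The five steps above can be sketched as a toy simulation (illustrative names, not a real TCP implementation; the 400-byte buffer matches the example):

```python
# Toy flow-control model: the receiver advertises its free buffer space (win)
# in every ACK, and the sender must never exceed the advertised window.

class Receiver:
    def __init__(self, buffer_size=400):
        self.buffer_size = buffer_size
        self.buffered = 0                  # unprocessed bytes in the receive buffer

    def window(self):
        return self.buffer_size - self.buffered   # RCV.WND

    def receive(self, nbytes):
        assert nbytes <= self.window(), "sender violated the advertised window"
        self.buffered += nbytes
        return self.window()               # win value carried back in the ACK

    def process(self, nbytes):
        self.buffered -= min(nbytes, self.buffered)  # application reads data out


rcv = Receiver()
print(rcv.receive(200))   # steps 1-2: 200 bytes sent, ACK carries win = 200
rcv.receive(200)          # step 3: 200 more bytes arrive...
rcv.process(100)          # ...but the busy receiver only processes 100
print(rcv.window())       # win = 400 - 200 - 100 = 100
print(rcv.receive(100))   # step 4: 100 more bytes, win drops to 0
```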

29. How does ARP work?

ARP, the Address Resolution Protocol, is used to map IP addresses to MAC addresses.

  1. First, each host creates an ARP list in its ARP buffer to represent the mapping between IP and MAC addresses.
  2. Before sending a packet to the destination host, the source host checks whether its ARP list already contains the MAC address for that IP address. If it does, the packet is sent directly to that MAC address; if not, the source host broadcasts an ARP request on the local network segment to ask for the destination host's MAC address. The ARP request contains the source host's IP address and hardware address, and the destination host's IP address.
  3. Every host on the network segment receives the ARP request and checks whether the destination IP address in it matches its own. If not, the packet is ignored. If it matches, the host first records the sender's MAC and IP addresses in its own ARP list, overwriting any existing entry, and then sends an ARP reply to the source host telling it the MAC address it asked for.
  4. After receiving the ARP response packet, the source host adds the IP address and MAC address of the destination host to its ARP list and uses the information to start data transmission. If the source host does not receive any ARP response packet, the ARP query fails.

30. Talk about TCP congestion control

Congestion control is applied to the network to prevent excessive data packets from being injected into the network and to avoid excessive network load. Its main goal is to maximize the bandwidth of the bottleneck links on the network. How is it different from flow control? Flow control is applied to the receiver to control the transmission speed according to the actual receiving capacity of the receiver to prevent packet loss.

We can think of a network link as a water pipe. If we want to make the best use of the network to transmit data, it is to make the water pipe full as quickly as possible.

The sender maintains a variable, cwnd (congestion window), which estimates how much data (water) the link (pipe) can carry over a period of time. Its size reflects the level of network congestion and changes dynamically. But to achieve maximum efficiency, how do we find out how much this pipe can carry?

An easy way to do this is to increase the amount of water being transmitted until the pipe is about to burst (which corresponds to packet loss on the network), as described by TCP:

As long as there is no congestion in the network, the value of the congestion window can be increased to send more packets, but as long as there is congestion, the value of the congestion window should be reduced to reduce the number of packets injected into the network.

In fact, there are several commonly used algorithms for congestion control

  • Slow start
  • Congestion avoidance
  • Congestion occurs
  • Fast recovery

Slow start algorithm

The slow start algorithm means, literally, take it slow. After the TCP connection is established, the sender does not immediately send a large amount of data; it first probes the network's congestion level, growing the congestion window from small to large. As long as no packet loss occurs, cwnd increases by 1 MSS for each ACK received, so the send window doubles every round, growing exponentially. Once cwnd reaches the slow start threshold, TCP enters the congestion avoidance stage.

  • When the TCP connection completes, initialize cwnd = 1, meaning one MSS worth of data can be sent.
  • Each time an ACK is received, cwnd increases by one.
  • So cwnd doubles with every RTT: exponential growth.

To keep cwnd from growing without bound and congesting the network, a slow start threshold (ssthresh) state variable is needed. When cwnd reaches this threshold, it is as if the water tap were turned down to ease congestion. That is, when cwnd >= ssthresh, the congestion avoidance algorithm takes over.

Congestion avoidance algorithm

In general, the slow start threshold ssthresh starts at 65535 bytes. After cwnd reaches ssthresh:

  • For each ACK received, cwnd = cwnd + 1/cwnd
  • For each RTT that passes, cwnd = cwnd + 1

Obviously, this is a linear algorithm to avoid excessive network congestion.

Congestion occurs

When packet loss occurs in network congestion, there are two situations:

  • RTO timeout retransmission
  • The fast retransmission

If an RTO timeout retransmission occurs, the congestion generation algorithm is used

  • Slow start threshold ssthresh = cwnd / 2
  • cwnd is reset to 1
  • A new slow start begins

That is painful: years of hard-won window growth wiped out in one stroke. Fast retransmit does better: when the sender receives three duplicate ACKs in a row, it retransmits the lost segment immediately without waiting for the RTO to expire.

ssthresh and cwnd then change as follows:

  • Congestion window cwnd = cwnd / 2
  • Slow start threshold ssthresh = cwnd
  • Enter the fast recovery algorithm

Fast recovery

Fast retransmit and fast recovery are usually used together. The fast recovery algorithm takes the arrival of three duplicate ACKs as a sign that the network is not that bad, so there is no need to react as drastically as after an RTO timeout.

As stated earlier, before entering fast recovery, cwnd and ssthresh have already been updated:

  • cwnd = cwnd / 2
  • ssthresh = cwnd

Then the fast recovery algorithm proceeds as follows:

  • cwnd = ssthresh + 3
  • Retransmit the lost segment (the one the duplicate ACKs point at)
  • For each additional duplicate ACK received, cwnd = cwnd + 1
  • When an ACK for new data arrives, cwnd = ssthresh. Receiving an ACK for new data means recovery is over, and congestion avoidance resumes.
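
The four algorithms can be seen working together in a simplified per-RTT trace (units are segments; the +3 inflation of fast recovery and other details are deliberately omitted, so this is a sketch, not real TCP):

```python
# Simplified congestion-control state update, one call per RTT.

def next_cwnd(cwnd, ssthresh, event=None):
    """Return (cwnd, ssthresh) after one RTT with the given loss event."""
    if event == "timeout":                    # RTO: congestion occurs, restart slow start
        return 1, max(cwnd // 2, 2)
    if event == "3dupack":                    # fast retransmit -> fast recovery
        return max(cwnd // 2, 2), max(cwnd // 2, 2)
    if cwnd < ssthresh:                       # slow start: exponential growth
        return min(cwnd * 2, ssthresh), ssthresh
    return cwnd + 1, ssthresh                 # congestion avoidance: linear growth

cwnd, ssthresh = 1, 8
trace = []
for event in [None, None, None, None, "3dupack", None, "timeout", None]:
    cwnd, ssthresh = next_cwnd(cwnd, ssthresh, event)
    trace.append(cwnd)
print(trace)   # [2, 4, 8, 9, 4, 5, 1, 2]: doubling, then +1, halving, back to 1
```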

31. What are the common application-layer protocols corresponding to TCP and UDP?

The application layer protocols based on TCP include HTTP, FTP, SMTP, TELNET, and SSH

  • HTTP: HyperText Transfer Protocol. The default port is 80
  • FTP: File Transfer Protocol, default port (20 for data Transfer, 21 for control information Transfer)
  • SMTP: Simple Mail Transfer Protocol. The default port is 25
  • TELNET: Teletype over the Network. The default port is 23
  • SSH: Secure Shell. The default port is 22

UDP-based application layer protocols include DNS, TFTP, and SNMP:

  • DNS: Domain Name Service. Default port 53
  • TFTP: Trivial File Transfer Protocol. The default port is 69
  • SNMP: Simple Network Management Protocol. Ordinary requests and responses use UDP port 161; Trap messages are received on UDP port 162.

32. Relationship between half-connection queues and SYN Flood attacks

Before TCP enters the three-way handshake, the server changes from the CLOSED state to the LISTEN state and creates two queues internally: the half-connection queue (SYN queue) and the full-connection queue (ACCEPT queue).

What is a half-connection queue? What is an ACCEPT queue? Recall the diagram of the TCP three-way handshake:

  • During the TCP three-way handshake, the client sends a SYN to the server. After receiving the SYN, the server replies with an ACK and SYN. The state changes from LISTEN to SYN_RCVD, and the connection is pushed into the SYN queue, or half-connection queue.
  • When the client replies with an ACK and the server receives it, the three-way handshake is complete. The connection waits to be picked up by the specific application, and before it is picked up, it is pushed to the ACCEPT queue, the full connection queue.

A SYN Flood attack is a typical Denial of Service (DoS) attack. In a short period of time, the attacker forges non-existent source IP addresses and sends a large number of SYN packets to the server. The server replies with SYN+ACK packets but never receives the final ACKs, so half-open connections pile up until the half-connection queue is full and normal TCP requests can no longer be processed.

Common defenses include syn cookies and SYN proxy firewalls.

  • syn cookies: after receiving a SYN packet, the server computes a cookie from the packet's source address, port and other fields and uses it as the sequence number of its own SYN+ACK. After replying with SYN+ACK it does not immediately allocate resources. When the final ACK arrives, the server recomputes the cookie and checks it against the acknowledgement number: if it matches, the connection is established; otherwise the packet is discarded.

  • SYN proxy firewall: the firewall answers each received SYN on the server's behalf and holds the half-connection itself. Only after the sender returns the ACK does the firewall reconstruct the SYN and send it to the server to establish a real TCP connection.
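
The syn cookie idea can be sketched in a few lines: derive the SYN+ACK sequence number from the connection 4-tuple plus a server-side secret, so the server keeps no per-connection state until a valid ACK arrives. Real implementations also fold in a timestamp and MSS bits; the secret and addresses below are illustrative:

```python
import hashlib
import hmac

SECRET = b"server-side-secret"   # illustrative secret known only to the server

def syn_cookie(src_ip, src_port, dst_ip, dst_port):
    """Compute a 32-bit cookie from the connection 4-tuple."""
    msg = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    digest = hmac.new(SECRET, msg, hashlib.sha256).digest()
    return int.from_bytes(digest[:4], "big")

def ack_is_valid(ack, src_ip, src_port, dst_ip, dst_port):
    """The final ACK must acknowledge cookie + 1; recompute and compare."""
    expected = (syn_cookie(src_ip, src_port, dst_ip, dst_port) + 1) & 0xFFFFFFFF
    return ack == expected

# Server sends SYN+ACK with seq = cookie and allocates nothing.
cookie = syn_cookie("203.0.113.5", 54321, "198.51.100.7", 80)

# A genuine client's ACK verifies; a forged one is dropped.
print(ack_is_valid((cookie + 1) & 0xFFFFFFFF, "203.0.113.5", 54321, "198.51.100.7", 80))  # True
print(ack_is_valid(12345, "203.0.113.5", 54321, "198.51.100.7", 80))
```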

33. Why use a MAC address when you have an IP address?

  • In short, both IP addresses and MAC addresses identify a computer on a network. However, a computer's IP address can be changed by the user, which makes it relatively hard to manage, while the MAC address cannot be changed, so the two are generally used together.
  • Could we drop IP addresses and use only MAC addresses? No. MAC addresses did come first, and early networks used them alone; but as devices multiplied, routing became more and more complex, so the concept of subnets emerged: for packets destined for another subnet, a router only needs to forward them toward that subnet.
  • So why use IP addresses at all? Because IP addresses are allocated by region: devices in the same subnet share an IP address prefix, so a router only needs the prefix to know where a device is. With MAC addresses alone, a router would have to remember which subnet every individual MAC address lives in, which would require impossibly large storage.
  • The IP address works like a postal address, while the MAC address identifies the actual recipient; in one communication, both are indispensable.

34. Talk about the use of the keepalive timer

In addition to the TIME-WAIT timer, TCP also has a keepalive timer. Consider this scenario: a client establishes a TCP connection with a server, but then the client's host suddenly fails. The server can obviously no longer receive data from the client, so something is needed to keep the server from waiting forever. This is where the keepalive timer comes in.

The server resets the keepalive timer every time it receives data from the client, which is usually set for two hours. If no data is received from the client for two hours, the server sends a probe packet segment, and then sends the probe packet every 75 seconds. If there is no response from the client after 10 consecutive probe segments are sent, the server considers the client to be faulty and closes the connection.
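
On Linux these knobs are exposed per socket; here is a sketch of configuring exactly the values described above (2-hour idle time, 75-second probe interval, 10 probes). The TCP_KEEP* option names are Linux-specific:

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)       # enable keepalive
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 7200)   # 2 h idle before first probe
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 75)    # 75 s between probes
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 10)      # give up after 10 probes

enabled = sock.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE)
print(enabled)   # non-zero: keepalive is on
sock.close()
```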

35. Talk about ARP

ARP, the address resolution protocol, is a TCP/IP protocol that obtains a MAC physical address from an IP address.

What is an IP address and what is a MAC address?

  • IP address: An Internet protocol address. It is a uniform address format provided by the IP protocol. It assigns a logical address to each network and host on the Internet to mask the differences in physical addresses.
  • MAC address: Ethernet address or physical address, which is an address used to confirm the location of a network device.

Why do you need ARP?

  • In the network access layer, if a host on the same LAN wants to communicate with another host, it needs to locate the host by MAC address before sending packets.
  • In the network layer and transport layer, computers locate the destination host by IP address. The corresponding data packets contain only the IP address of the destination host, but no MAC address.
  • Therefore, before sending packets, a host must obtain the MAC address from the IP address and then send the packets to the correct destination host. This lookup is done by the ARP protocol.

ARP workflow

When host A and host B want to communicate, the workflow is as follows:

  1. Host A first checks its local ARP cache table to see whether host B's IP-to-MAC mapping exists.
  2. If no entry for host B's MAC address is found, host A broadcasts an ARP request group on the LAN; every host on the LAN receives it.
  3. Host B receives the packet, finds that the IP address in it matches its own, and adds host A's IP and MAC addresses to its local ARP cache table.
  4. Host B responds to host A with a packet containing its MAC address.
  5. After receiving the packet, host A adds host B's IP and MAC addresses to its ARP cache table.

36. TCP sticky packets and unpacking

TCP is stream-oriented: the data it carries is an unbounded byte stream. The TCP layer does not understand the meaning of the upper-layer business data; it packs segments according to the actual state of its buffer. So a complete application message may be split by TCP into several packets (unpacking), or several small messages may be bundled into one large packet (sticky packets).

Why do sticky and unpacking occur?

  • If the data to be sent is smaller than the TCP send buffer and TCP sends out data that was written to the buffer over several writes in one go, sticky packets occur.
  • If the application layer on the receiving end does not read the data out of the receive buffer in time, sticky packets occur.
  • If the data to be sent is larger than the remaining space in the TCP send buffer, unpacking occurs.
  • If the data to be sent is larger than the MSS (maximum segment size), TCP splits it before transmission, i.e. whenever TCP packet length - TCP header length > MSS.

Solution:

  • The sender encapsulates each packet with a fixed length
  • Append a special delimiter character to the end of each message for segmentation
  • Split the data into two parts, a header and a body; the header has a fixed size and contains a field declaring the size of the body
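
The third fix (a fixed-size header declaring the body length) is the most common; here is a minimal sketch using a 4-byte big-endian length prefix:

```python
import struct

def frame(payload: bytes) -> bytes:
    """Prefix a message with its 4-byte big-endian length."""
    return struct.pack(">I", len(payload)) + payload

def deframe(stream: bytes):
    """Split a received byte stream back into the original messages."""
    messages, offset = [], 0
    while offset + 4 <= len(stream):
        (length,) = struct.unpack_from(">I", stream, offset)
        if offset + 4 + length > len(stream):
            break                      # partial message: wait for more data
        messages.append(stream[offset + 4 : offset + 4 + length])
        offset += 4 + length
    return messages

# Two messages "stuck together" in one TCP read are still recovered intact:
stream = frame(b"hello") + frame(b"world!")
print(deframe(stream))                 # [b'hello', b'world!']
```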

37. What’s the difference between “forward” and “redirect”?

  • Forward: the client and browser issue only one request. The first resource (a Servlet, HTML page, JSP, or other resource) forwards the request to a second resource on the server, which produces the response; objects saved in the request object are shared between the two resources.
  • Redirect involves two HTTP requests: the server answers the first request by telling the browser to issue a second request to a different URL.

Here’s a popular example:

  • Forward (direct forwarding) is equivalent to: "A asks B to borrow money, B says no, so B asks C on A's behalf, and passes the answer back to A either way."
  • Redirect (indirect forwarding) is equivalent to: "A asks B to borrow money, B says no, so A goes to ask C directly."

Look at these two pictures to make it a little easier to understand:

  • How Redirect works:

  • How forward works

38. Nagle algorithm and delayed acknowledgment

Nagle algorithm

What would go wrong if the sender frantically sent tiny packets, say one byte each, to the receiver?

In TCP/IP, no matter how little data is sent, protocol headers are always prepended to it, and the peer must send back an ACK for the received data. To make the best use of network bandwidth, TCP wants to send data in chunks as large as possible. The Nagle algorithm exists to send large blocks whenever possible and keep the network from being flooded with many tiny packets.

The basic rule of the Nagle algorithm is: at any time, at most one unacknowledged small segment may be outstanding. A "small segment" is a data block smaller than the MSS; "unacknowledged" means sent but with no ACK yet confirming receipt.

Implementation rules of Nagle algorithm:

  • If the packet length reaches MSS, the packet is allowed to be sent.
  • If it contains a FIN, it is allowed to send.
  • If the TCP_NODELAY option is set, the send is allowed.
  • If the TCP_CORK option is not set, the packet is allowed to be sent if all small packets (packets shorter than the MSS length) are acknowledged.
  • If none of the above conditions is met, but a timeout occurs (typically 200ms), it is sent immediately.
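
Setting TCP_NODELAY, mentioned in the rules above, is how an application turns the Nagle algorithm off per socket, e.g. for latency-sensitive interactive traffic:

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
before = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
print(before)   # 0: Nagle is on by default
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)   # disable Nagle
after = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
print(after)    # 1: small writes now go out immediately
sock.close()
```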

Delayed acknowledgment

Suppose the receiver gets one packet from the sender and then a second one very shortly afterwards. Is it better for the receiver to ACK each one separately, or both together?

After receiving the data packet, if there is no data to send to the peer, the receiver can wait for a period of time before confirming (the default value is 40ms on Linux). If there is data to be transmitted to the peer end during this period, the ACK is transmitted along with the data, rather than sending a separate ACK. If no data needs to be sent within the specified period, an ACK is also sent to prevent the peer end from thinking that packets are lost.

However, in some scenarios acknowledgement must not be delayed: for example, when an out-of-order segment is detected, when a segment larger than one frame is received, or when the window size needs to be adjusted.

In general, the Nagle algorithm and delayed acknowledgment should not be used together: the Nagle algorithm delays sending and delayed acknowledgment delays the ACK, so combining them compounds the delay and can cause performance problems.

39. The difference between URIs and URLs

  • URI (Uniform Resource Identifier): uniquely identifies a resource.
  • URL (Uniform Resource Locator): a kind of URI whose main function is to provide the path to a resource, i.e. where to find it.

To use a classic analogy, a URI is like an ID card that uniquely identifies a person, while a URL is more like an address that can be used to find that person.

40. What is a digital signature? What is a digital certificate?

If you know how HTTPS works, you already know digital certificates. They were introduced to keep public keys from being tampered with:

Digital Certificate Composition

  • The public key and personal information are hashed to form a message digest. The digest is taken to a trusted Certification Authority (CA), which encrypts it with its private key to form a digital signature.
  • The public key, personal information, and digital signature together constitute a digital certificate.

41. What is SQL injection? An example?

SQL injection is a code injection technique commonly used to attack Web applications. It passes special characters through a Web application's input parameters to trick the application server into executing malicious SQL commands and illegally obtain system data. It is currently one of the most common ways hackers attack databases.

How is SQL injection attacked?

Take a common business scenario as an example: Enter an employee name in the search box of a Web form, and then query the employee with the corresponding name in the background.

In this scenario, the front-end page generally sends a name parameter name to the background, and then the background queries the results through SQL

```java
String name = "";  // the name parameter passed by the front end
// Query the staff table by the name passed from the front end
String sql = "select * from staff where name='" + name + "'";
```

Because the SQL is built by direct string concatenation, fully trusting the front-end parameter is dangerous. If the front end passes `' or '1'='1`, the SQL becomes:

```sql
select * from staff where name='' or '1'='1';
```

This SQL returns every employee's information: the requester has exceeded their authority, and all employee data is exposed.

How can I prevent SQL injection problems

1). Use #{} instead of ${}

In MyBatis, using #{} instead of ${} can greatly prevent SQL injection.

  • #{} is a parameter placeholder. For string types it automatically adds quotes, and because MyBatis uses precompiled (prepared) statements, the parameter is not compiled into the SQL again, which largely prevents SQL injection.
  • ${} is simple string substitution: whatever string is passed in is spliced into the SQL as-is, so it carries SQL injection risk.

2). Do not expose some unnecessary log or security information, such as avoid directly responding to some SQL exception information.

If SQL throws an exception, do not expose the raw error message to the user; respond with a custom exception instead.

3). Do not trust any external input parameters; filter database keywords contained in the parameters

Add a parameter-checking filter that screens out union, or, and other database keywords.

4). Proper access control

Before querying information, verify that the current user has permission to do so. For example, in the code you can carry the user's enterprise Id, or take the current user's session information, and before running the query check whether the user belongs to that enterprise, i.e. whether they are allowed to query its employees.
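
The same placeholder idea works outside MyBatis: every database driver offers parameterized queries that keep user input as data, never as SQL text. A sketch with Python's built-in sqlite3 (illustrative schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table staff (name text)")
conn.executemany("insert into staff values (?)", [("alice",), ("bob",)])

malicious = "' or '1'='1"   # the injection string from the example above

# Parameterized query: the whole string is bound as a literal value.
rows_bad = conn.execute("select * from staff where name = ?", (malicious,)).fetchall()
print(rows_bad)             # []: no employee is actually named that

rows_ok = conn.execute("select * from staff where name = ?", ("alice",)).fetchall()
print(rows_ok)              # [('alice',)]
```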

42. What are DoS, DDoS, DRDoS attacks?

  • DoS: Denial of Service. Any attack that causes a denial of service is called a DoS attack. The most common DoS attacks are network bandwidth attacks and connectivity attacks.
  • DDoS: Distributed Denial of Service. Multiple attackers in different locations attack one or more targets simultaneously, or an attacker takes control of many machines in different locations and uses them to attack the victim at the same time. Common DDoS attacks include SYN Flood, Ping of Death, ACK Flood, and UDP Flood.
  • DRDoS: Distributed Reflection Denial of Service. The attacker sends large numbers of packets whose source IP address is forged as the victim's address to many hosts, and those hosts then flood the victim with their replies.

43. The difference between WebSocket and Socket

  • Socket = IP address + port + protocol.

Specifically, Socket is a set of standards that complete a high encapsulation of TCP/IP, masking network details to facilitate better network programming for developers.

  • WebSocket is a persistent-connection protocol. It arrived with HTML5 to solve the problem that HTTP does not support persistent connections.
  • Socket is a standard interface for network programming, while WebSocket is an application-layer communication protocol.

44. Functions of ICMP

ICMP, the Internet Control Message Protocol.

  • ICMP is a connectionless protocol used to transmit error reports and control information.
  • It is a very important protocol of great significance to network security. It belongs to the network layer and is mainly used to pass control information between hosts and routers, including error reports, congestion control and status information.
  • When IP data cannot reach its destination, or an IP router cannot forward packets at the current transmission rate, an ICMP message is sent automatically.

Ping, for example, is based on ICMP.

45. Process and principle of an HTTP request

HTTP, the hypertext transfer protocol, transfers data on top of TCP/IP; the data transmitted can be HTML files, image files, and so on. Take visiting Baidu as an example and walk through the HTTP request process:

  1. The client performs DNS domain name resolution and obtains the corresponding IP address
  2. Based on this IP address, find the corresponding server to establish a connection (three-way handshake)
  3. Initiates an HTTP request (a complete HTTP request packet) after establishing a TCP connection
  4. The server responds to the HTTP request, and the client gets the HTML code
  5. The client parses the HTML code and renders the page with the resources in the HTML code, such as JS, CSS, images, and so on.
  6. Server closes TCP connection (four waves)
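
Step 3's "complete HTTP request packet" is just text written over the TCP connection; a minimal example of the bytes the client sends (host and path are illustrative):

```python
host = "www.baidu.com"
request = (
    "GET / HTTP/1.1\r\n"     # request line: method, path, version
    f"Host: {host}\r\n"      # mandatory header in HTTP/1.1
    "Connection: close\r\n"  # ask the server to close after responding
    "\r\n"                   # blank line ends the header section
).encode()
print(request)

# Sending it would be roughly:
#   sock = socket.create_connection((host, 80)); sock.sendall(request)
```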

46. How ping works

Ping (Packet Internet Groper) is an Internet packet explorer used to test network connectivity. Ping is an application-layer command in the TCP/IP architecture. It mainly sends Internet Control Message Protocol (ICMP) echo request packets to a specific destination host to test whether the destination is reachable and learn about its status.

Generally speaking, ping is used to check whether the network is connected; it works on top of the ICMP protocol. Suppose machine A pings machine B; the process is as follows:

  1. The ping program builds an ICMP echo request packet in a fixed format
  2. The ICMP packet, together with target machine B's IP address, is handed down to the IP layer
  3. The IP layer builds an IP packet with the local IP address as the source address, machine B's IP address as the destination address, and some other control information
  4. The MAC address of target machine B is obtained (via ARP)
  5. The data link layer builds a data frame, with the destination address being the MAC address passed down from the IP layer and the source address being the local host's MAC address
  6. Machine B receives the frame and compares the destination MAC address with its own: if it matches, it processes the packet and replies; otherwise it discards the frame
  7. The round-trip time is calculated from the timestamp in the ICMP reply packet returned by the destination host
  8. The final output includes the destination IP address, the number of packets sent, received, and lost, and the minimum, maximum, and average round-trip times
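
Step 1's "fixed format" is the ICMP echo request: type 8, code 0, a checksum, an identifier, and a sequence number. Building one by hand looks like this (actually sending it needs a raw socket and root privileges, so only construction is shown):

```python
import struct

def icmp_checksum(data: bytes) -> int:
    """Standard Internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack(f">{len(data)//2}H", data))
    total = (total & 0xFFFF) + (total >> 16)   # fold carries back in
    total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def echo_request(ident: int, seq: int, payload: bytes = b"ping") -> bytes:
    """Build an ICMP echo request: type 8, code 0, checksum, id, seq."""
    header = struct.pack(">BBHHH", 8, 0, 0, ident, seq)   # checksum = 0 first
    checksum = icmp_checksum(header + payload)
    return struct.pack(">BBHHH", 8, 0, checksum, ident, seq) + payload

packet = echo_request(ident=0x1234, seq=1)
print(len(packet))             # 12: 8-byte ICMP header + 4-byte payload
print(icmp_checksum(packet))   # 0: a packet with a correct checksum verifies to zero
```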

47. How to resolve if a large number of CLOSE_WAIT states occur on the server?

Let's recall TCP's four-way wave for closing a connection:

  • When the server receives the client's FIN, the TCP stack automatically replies with an ACK and the connection enters the CLOSE_WAIT state.
  • However, if the server never calls close() on the socket, the connection can never move on to LAST_ACK, leaving a large number of connections stuck in CLOSE_WAIT.
  • So a large number of CLOSE_WAIT states usually indicates a bug in the program: the socket is not closed in time.

48. What are CSRF attacks and how can they be avoided

What is a CSRF attack?

CSRF, or cross-site Request forgery, simply means that an attacker steals your identity and sends malicious requests in your name. In contrast to cross-site scripting (XSS), where XSS exploits the user’s trust in a given site, CSRF exploits the site’s trust in the user’s web browser.

How does CSRF attack?

Let’s look at this example.

  1. Tom logs in to the bank's website and does not log out; the browser holds Tom's identity information for the bank.
  2. Hacker Jerry embeds a forged transfer request in a forum post.
  3. Tom reads the post while still logged in to the bank's website.
  4. The forged transfer request, together with the identity information, is sent to the bank's website.
  5. The bank's website sees valid identity credentials, takes it for a legitimate operation by Tom, and Tom ultimately loses money.

How do I resolve CSRF attacks

  • Check the Referer field. The HTTP header has a Referer field that identifies the address from which the request came.
  • Add a verification token to each request.
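
A sketch of the token defense (framework-agnostic, names illustrative): the server issues a random token per session, renders it into the form as a hidden field, and rejects any state-changing request whose token does not match:

```python
import hmac
import secrets

session_tokens = {}                       # session id -> csrf token (server-side store)

def issue_token(session_id: str) -> str:
    """Generate a random token for this session; render it into the form."""
    token = secrets.token_urlsafe(32)
    session_tokens[session_id] = token
    return token

def verify_token(session_id: str, submitted: str) -> bool:
    """Reject the request unless the submitted token matches (constant-time)."""
    expected = session_tokens.get(session_id, "")
    return hmac.compare_digest(expected, submitted)

tok = issue_token("tom-session")
print(verify_token("tom-session", tok))       # True: genuine form submission
print(verify_token("tom-session", "forged"))  # False: Jerry cannot guess the token
```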

49. How does RARP work?

  • ARP (Address Resolution Protocol) lets a device obtain an unknown MAC address from an IP address it already knows.
  • RARP (Reverse Address Resolution Protocol) works the other way around: a host broadcasts its physical address and expects back the corresponding IP address, which the RARP server supplies in its reply (typically used by diskless workstations).

RARP works as follows:

  1. The sending host sends a local RARP broadcast in which it claims its MAC address and asks any RARP server that receives the request to assign an IP address.
  2. After receiving the request, the RARP server on the local network segment checks the RARP list and searches for the IP address corresponding to the MAC address.
  3. If an entry exists, the RARP server sends a response packet to the source host, providing the IP address for it to use.
  4. If it does not exist, the RARP server does not respond.
  5. The source host receives the response information from the RARP server and uses the obtained IP address to communicate. If no response is received from the RARP server, the initialization fails.
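
The exchange above can be modeled as a simple table lookup. This is a toy sketch of the protocol's logic only, with made-up MAC and IP addresses; real RARP runs over raw Ethernet frames.

```python
# Static MAC -> IP table held by the RARP server (illustrative addresses).
RARP_TABLE = {
    "aa:bb:cc:dd:ee:01": "192.168.1.10",
    "aa:bb:cc:dd:ee:02": "192.168.1.11",
}

def rarp_server_handle(request_mac):
    """Steps 2-4: look up the MAC; reply with the IP, or stay silent (None)."""
    return RARP_TABLE.get(request_mac)

def diskless_host_boot(mac):
    """Steps 1 and 5: broadcast our MAC, then use the reply or fail."""
    ip = rarp_server_handle(mac)  # stands in for the broadcast + reply
    if ip is None:
        raise RuntimeError("initialization failed: no RARP response")
    return ip

print(diskless_host_boot("aa:bb:cc:dd:ee:01"))  # → 192.168.1.10
```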

50. What is the DNS resolution process?

The Domain Name System (DNS) is a distributed database that maps domain names to IP addresses on the Internet. Its job is simple: given a domain name, find the corresponding IP address.

The parsing process is as follows:

  1. First, check the browser cache for the corresponding IP address; if found, return it directly. Otherwise, go to the next step.
  2. Send the request to the local DNS server and query its cache; if found, return it directly. Otherwise, continue.
  3. The local DNS server queries a root DNS server, which tells it which top-level (TLD) DNS server to ask.
  4. The local DNS server queries the TLD DNS server, which tells it which authoritative DNS server to ask.
  5. The local DNS server queries the authoritative DNS server, which returns the IP address for the requested domain name.
  6. Finally, the local DNS server returns the domain name's IP address to the host.
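The steps above can be sketched as an iterative lookup. This is a toy model only: the record tables, zone names, and addresses below are invented for illustration, and real resolvers speak the DNS wire protocol with referrals and TTLs.

```python
# Toy iterative resolution mirroring steps 1-6: browser cache first,
# then the local DNS server walks root -> TLD -> authoritative.
BROWSER_CACHE = {}
ROOT = {"com": "tld-server"}                 # root refers us to the TLD server
TLD = {"site.com": "auth-server"}            # TLD refers us to the authoritative server
AUTH = {"www.site.com": "203.0.113.7"}       # authoritative server has the A record

def resolve(domain):
    # Step 1: browser cache hit returns immediately.
    if domain in BROWSER_CACHE:
        return BROWSER_CACHE[domain]
    # Steps 3-5: query root, TLD, and authoritative servers in turn.
    tld_label = domain.rsplit(".", 1)[-1]    # "com"
    if tld_label not in ROOT:
        return None                          # root has no referral
    zone = domain.split(".", 1)[-1]          # "site.com"
    if zone not in TLD:
        return None                          # TLD has no referral
    ip = AUTH.get(domain)                    # authoritative answer
    # Step 6: cache the answer and return it to the host.
    if ip is not None:
        BROWSER_CACHE[domain] = ip
    return ip

print(resolve("www.site.com"))  # → 203.0.113.7
```

A second call to `resolve("www.site.com")` is served from `BROWSER_CACHE` without touching any server, which is what step 1 buys in practice.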

For example, to query the IP address of www.baidu.com:

  • First, check the browser cache for the IP address of www.baidu.com; if found, return it directly. Otherwise, continue.
  • Send the request to the local DNS server and query its cache; if found, return it directly. Otherwise, continue.
  • The local DNS server queries a root DNS server, which returns the list of IP addresses of the top-level DNS servers responsible for .com.
  • The local DNS server then queries one of the .com top-level DNS servers, which returns the list of IP addresses of the authoritative DNS servers for baidu.com.
  • The local DNS server then queries one of those authoritative DNS servers, which returns the IP address corresponding to www.baidu.com.
