preface
Welcome to star our GitHub repository: github.com/bin39232820… The best time to plant a tree was ten years ago; the second-best time is now.
Tips
This interview guide series often does not dig into every detail. It is meant to help students review from the interviewee's perspective, so I assume that you, the reader, already know most of the basics.
The mind map is here: www.processon.com/view/link/6…
Computer networking fundamentals are, in fact, a very important topic for us developers, so today let's look at the common interview questions.
Below is a list of the previous articles in this series:
- 2021-Java Backend Engineer Interview Guide (Introduction)
- 2021-Java Backend Engineer Interview Guide
- 2021-Java Backend Engineer Interview Guide -(Concurrency – Multithreading)
- 2021-Java Backend Engineer Interview Guide -(JVM)
- 2021-Java Backend Engineer Interview Guide -(MySQL)
- 2021-Java Backend Engineer Interview Guide -(Redis)
- Java Backend Engineer Interview Guide -(Elasticsearch)
- 2021-Java Backend Engineer Interview Guide -(Message Queue)
- 2021-Java Backend Engineer Interview Guide -(SSM)
- 2021-Java Backend Engineer Interview Guide (SpringBoot+SpringCloud)
- 2021-Java Backend Engineer Interview Guide -(Distributed Theory +Zookeeper)
Talk about the differences between HTTP/1.0, HTTP/1.1 and HTTP/2.0
Some differences between HTTP/1.0 and HTTP/1.1
- Caching: HTTP/1.0 mainly uses the If-Modified-Since and Expires headers. HTTP/1.1 introduces more cache-control policies, such as entity tags (ETag) and the If-Unmodified-Since, If-Match and If-None-Match headers.
- Host header handling: HTTP/1.0 assumes each server is bound to a unique IP address, so the request message does not carry a hostname. With the rise of virtual hosting, however, one physical server can host multiple virtual hosts (multi-homed web servers) that share a single IP address. In HTTP/1.1, both request and response messages must support the Host header field, and a request without one is rejected with 400 Bad Request.
- Long connections: HTTP/1.1 supports persistent connections and pipelining, which deliver multiple HTTP requests and responses over a single TCP connection, reducing the cost and latency of repeatedly establishing and closing connections. Connection: keep-alive is enabled by default in HTTP/1.1, compensating for HTTP/1.0's behavior of creating a new connection for every request.
SPDY: an optimization of HTTP/1.x
- Reduced latency. To address HTTP's high-latency problem, SPDY adopts multiplexing: multiple request streams share one TCP connection, which mitigates head-of-line (HOL) blocking at the HTTP level, lowers latency, and improves bandwidth utilization.
- Encrypted transport based on HTTPS, which greatly improves the security of transmitted data.
New features of HTTP/2.0 compared to HTTP/1.x
- A new binary format. HTTP/1.x parsing is text-based, and text-based protocol parsing has inherent drawbacks: text can take many forms, and robustness requires handling many corner cases. Binary framing is different: only combinations of 0s and 1s are recognized. For this reason, HTTP/2.0 adopted a binary format, which is both convenient to implement and robust.
What is the difference between multiplexing in HTTP/2.0 and long-connection reuse in HTTP/1.x?
- HTTP/1.0: one request-response per connection; establish a connection, use it, close it. Every request sets up its own connection.
- HTTP/1.1 pipelining: several requests can be queued on one connection, but the server must return responses in order, so a slow response blocks all the responses behind it (head-of-line blocking).
- HTTP/2: multiple requests execute in parallel on a single connection; a time-consuming request does not affect the normal execution of the others.
A lot happens when you type www.taobao.com into your browser to buy something.
You type www.taobao.com into the browser. This is a URL. The browser only knows the name "www.taobao.com", not the exact location, so it does not know how to visit it. So it opens the address book and looks it up. It can use the generic address-book protocol, DNS, or another, more precise lookup protocol, HTTPDNS. Either way, it ends up with an address such as 113.96.109.100. This is the IP address, the "house number" of the Internet world.
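In Java, for instance, the operating system's resolver (DNS, or the local hosts file) is exposed through `InetAddress`. A minimal lookup sketch; the class name is mine, and the address printed depends entirely on your resolver:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class DnsLookup {
    // Resolve a hostname to an IP address string using the OS resolver.
    static String resolve(String host) {
        try {
            return InetAddress.getByName(host).getHostAddress();
        } catch (UnknownHostException e) {
            throw new IllegalArgumentException("cannot resolve " + host, e);
        }
    }

    public static void main(String[] args) {
        // "localhost" resolves locally, without a network round trip.
        System.out.println(resolve("localhost"));
    }
}
```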
Knowing the destination address, the browser begins to package its request. For ordinary browsing, plain HTTP is often used; shopping requests usually need encrypted transmission, so HTTPS is used. Whichever protocol it is, the request will say "what you want to buy and how much".
The layer where DNS, HTTP, and HTTPS live is called the application layer. After application-layer encapsulation, the browser hands the packet to the next layer, via socket programming. The next layer is the transport layer, which has two protocols: the connectionless UDP and the connection-oriented TCP. For payments, TCP is typically used. Connection-oriented means TCP guarantees the packet reaches its destination: if it does not arrive, TCP resends it until it does. TCP carries two port numbers, one the browser listens on and one the e-commerce server listens on; the operating system uses ports to decide which process a packet should be delivered to.
Once the transport layer has wrapped the data, the browser passes the packet to the operating system's network layer, whose protocol is IP. The IP header carries the source IP address (the browser machine's) and the destination IP address (the e-commerce server's).
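To make the port idea concrete, here is a self-contained Java sketch: a server listens on an ephemeral port over loopback, and the operating system delivers the client's bytes to that process purely by (IP, port). The little echo protocol is invented for illustration:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.io.UncheckedIOException;
import java.net.ServerSocket;
import java.net.Socket;

public class PortDemo {
    // The OS uses port numbers to deliver data to the right process: the server
    // listens on a port, and the client reaches it via (IP, port) over loopback.
    static String roundTrip(String msg) {
        try (ServerSocket server = new ServerSocket(0)) { // port 0 = any free ephemeral port
            new Thread(() -> {
                try (Socket s = server.accept();
                     BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                    out.println("echo: " + in.readLine()); // reply on the same TCP connection
                } catch (IOException ignored) { }
            }).start();
            try (Socket client = new Socket("127.0.0.1", server.getLocalPort());
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()))) {
                out.println(msg);
                return in.readLine();
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip("hello")); // prints "echo: hello"
    }
}
```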
Now that the operating system knows the destination IP address, it has to figure out how to reach that machine. It first determines whether the destination IP is local or "out of town". If it were local, the address alone would tell it how to get there, but the e-commerce site is obviously not local; it is far away.
The operating system knows it has to leave home. It does not know where the faraway place is, but it can do this: to leave the local network, go through the gateway, the way you go through customs to leave a country. When the operating system starts up, DHCP configures its IP address and the default gateway's IP address, for example 192.168.1.1.
How does the operating system send the packet to the gateway? On a local network, communication basically works by shouting, so the operating system yells: "Who is 192.168.1.1?" The gateway answers: "I am; my local address is at the east end of the village." That local address is a MAC address, and the yelling protocol is ARP.
The operating system then passes the IP packet down to the next layer, the MAC layer, and the NIC sends it out. Because the packet carries the gateway's MAC address, it can reach the gateway.
After receiving the packet, the gateway decides the next hop based on its own "knowledge". A gateway is usually a router, and its knowledge of how to reach each IP range is kept in a routing table.
A router gains this "knowledge" because it regularly exchanges routes with its neighboring routers. The protocols used for this exchange are called routing protocols; OSPF and BGP are the common ones.
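A routing-table lookup uses longest-prefix match: among all entries whose prefix contains the destination address, the most specific one wins. A toy Java sketch with hard-coded routes; the prefixes and gateway names are invented, and real tables are populated by OSPF/BGP rather than by hand:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class RouteLookup {
    // A toy routing table: CIDR prefix -> next hop (entries invented for illustration).
    static final Map<String, String> TABLE = new LinkedHashMap<>();
    static {
        TABLE.put("10.0.0.0/8", "gw-internal");
        TABLE.put("10.1.0.0/16", "gw-branch");
        TABLE.put("0.0.0.0/0", "gw-default"); // default route
    }

    // Convert a dotted-quad IPv4 address to a 32-bit value.
    static long toLong(String ip) {
        long v = 0;
        for (String part : ip.split("\\.")) v = (v << 8) | Integer.parseInt(part);
        return v;
    }

    // Longest-prefix match: among all prefixes containing the address, the most specific wins.
    static String nextHop(String ip) {
        long addr = toLong(ip);
        String best = null;
        int bestLen = -1;
        for (Map.Entry<String, String> e : TABLE.entrySet()) {
            String[] cidr = e.getKey().split("/");
            int len = Integer.parseInt(cidr[1]);
            long mask = len == 0 ? 0 : (-1L << (32 - len)) & 0xFFFFFFFFL;
            if ((addr & mask) == (toLong(cidr[0]) & mask) && len > bestLen) {
                best = e.getValue();
                bestLen = len;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(nextHop("10.1.2.3")); // gw-branch: the /16 beats the /8
        System.out.println(nextHop("8.8.8.8"));  // gw-default
    }
}
```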
Eventually, the last gateway knows which local network the packet's destination is on. So it shouts into that network: "Who is the target IP?" The target server responds with its MAC address, and the packet can be delivered to it through that MAC address.
The target server sees that the MAC address matches, strips the MAC header, and hands the packet to the operating system's network layer. The IP address also matches, so the IP header is removed. The IP header says the next layer inside is TCP, so the packet is passed up to the transport layer, the TCP layer.
At this layer, every packet received gets a reply saying it arrived. This reply is not the result of the order request (whether the purchase succeeded, how much money was deducted, and so on); it is only a TCP-level acknowledgement of receipt. Of course, this acknowledgement travels back along the path the packet came on, to report safe arrival.
If no acknowledgement arrives after a while, the sending TCP layer retransmits the packet, through the same process as above, until an acknowledgement finally comes back. This retry is not your browser sending the order again: as far as the browser is concerned, one order request was sent, and the TCP layer retries by itself. Only if TCP truly fails, for example the connection breaks, does the browser's application layer resend the order request.
After the packet safely reaches the TCP layer, the destination port number in the TCP header identifies which process is listening on it, say the e-commerce site's Tomcat, and the data is delivered to that process.
The e-commerce process gets the HTTP request and now knows what to buy and how much. The Tomcat that first receives a request is often just a receptionist who coordinates the request as a whole rather than doing everything itself. For example, it tells the order-management process to register what was bought and in what quantity, tells the inventory process how much stock to deduct, tells the payment process how much to charge, and so on.
How does it tell those processes? Usually through RPC, remote procedure calls. With RPC, when the receptionist calls the order-management process, it does not have to worry about the network plumbing in between; the RPC framework handles that. There are many RPC frameworks: some carry their payloads inside HTTP messages, others encapsulate them directly in TCP packets.
When the receptionist finds that the relevant departments have all finished, it replies with an HTTPS packet indicating the order was placed successfully. That packet travels back the way the request came, eventually reaching your PC and your browser, which shows that the payment succeeded.
A simple purchase involves this many protocols. Managing large numbers of machines is a highly technical undertaking, and recently popular technologies such as cloud computing, containers, and microservices also rely on protocols to achieve large-scale collaboration between machines.
How do you check the IP address of the current server?
Ha ha, this one is easy; even people outside tech may know it. There are the following ways:
- On Windows it’s ipconfig
- On Linux, it’s ifconfig.
- The other option on Linux is ip addr
What is the difference between ifconfig and ip addr? Interviewers sometimes dig deep here, and it can be hard to answer from day-to-day work alone. Feel free to leave a comment below.
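In Java you can approximate what `ip addr` shows by enumerating interfaces with `NetworkInterface`. A minimal sketch; the output depends entirely on the machine it runs on:

```java
import java.net.InetAddress;
import java.net.NetworkInterface;
import java.net.SocketException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class LocalIps {
    // Enumerate this machine's interfaces and addresses, roughly what `ip addr` prints.
    static List<String> list() {
        try {
            List<String> out = new ArrayList<>();
            for (NetworkInterface nic : Collections.list(NetworkInterface.getNetworkInterfaces())) {
                for (InetAddress addr : Collections.list(nic.getInetAddresses())) {
                    out.add(nic.getName() + " " + addr.getHostAddress());
                }
            }
            return out;
        } catch (SocketException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        list().forEach(System.out::println); // e.g. "lo 127.0.0.1", "eth0 192.168.1.5"
    }
}
```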
Dynamic Host Configuration Protocol (DHCP)
If a network administrator has configured a DHCP server on the network, that server is the administrator of the IP address pool, and it immediately knows a "newcomer" has arrived. Here we can appreciate how important the MAC address is: when a machine joins the network, its MAC address is its only identity, and if even that were duplicated, it could not be configured. Because the MAC is unique, the DHCP server knows this is a new user that needs to rent an IP address, a step we call the DHCP Offer. At the same time, the DHCP server reserves the offered IP address and will not assign it to other DHCP clients.
When you visit www.taobao.com, the packet you send must be translated to a public IP by NAT, and the data Taobao returns must be translated from the public IP back to your LAN IP. Given that there are many hosts on the LAN, how does the NAT device know which response belongs to which host?
NAT replaces not only IP addresses but also port numbers. Specifically, a packet we send to an external server carries a source and destination IP address plus a source and destination port number. The destination port is fixed, such as 21 or 80, but the source port is randomly generated. When a packet reaches the NAT device, the private IP address is replaced with the public IP address, and the source port is replaced with a port number chosen by NAT. These NAT port numbers correspond to hosts on the LAN, and the NAT device maintains a table mapping port numbers to hosts. When the external server returns data, the NAT device uses the port number in the returned packet to find the LAN host and forwards the data to it. This is how LAN hosts gain access to the external network.
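The port-mapping table described above can be sketched in a few lines of Java. The port allocator and the address formats here are simplified assumptions for illustration, not how any particular NAT box works:

```java
import java.util.HashMap;
import java.util.Map;

public class NatTable {
    // Maps the NAT-assigned public source port back to the LAN host's private (ip, port).
    private final Map<Integer, String> portToHost = new HashMap<>();
    private int nextPublicPort = 40000; // toy allocator for public-side source ports

    // Outbound packet: replace the private (ip, port) with the public IP and a fresh port.
    int translateOutbound(String privateIp, int privatePort) {
        int publicPort = nextPublicPort++;
        portToHost.put(publicPort, privateIp + ":" + privatePort);
        return publicPort;
    }

    // Inbound reply: the destination port tells us which LAN host it belongs to.
    String translateInbound(int publicPort) {
        return portToHost.get(publicPort);
    }

    public static void main(String[] args) {
        NatTable nat = new NatTable();
        int p1 = nat.translateOutbound("192.168.1.10", 51000); // host A -> external server
        int p2 = nat.translateOutbound("192.168.1.11", 51000); // host B, same private port, no clash
        System.out.println(nat.translateInbound(p1)); // 192.168.1.10:51000
        System.out.println(nat.translateInbound(p2)); // 192.168.1.11:51000
    }
}
```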
Describe the process of establishing secure communication over Https
Let us summarize the HTTPS connection process in plain language. First, the client sends the SSL/TLS version and cipher suites it supports to the server. The server receives this, returns its choices along with its certificate, and then sends a few more messages; in the first round the client sends one message and the server sends three back. After the client and server have verified each other and exchanged keys, a secure connection is finally established.
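As a memory aid, the message sequence of the classic TLS handshake (RSA key exchange) can be written out as plain data, matching the "client sends one, server sends three" shape above. This sketch only lists the messages; real handshakes are performed by libraries such as JSSE or OpenSSL:

```java
import java.util.List;

public class TlsHandshakeSketch {
    // The classic TLS (RSA key-exchange) handshake message order, as a plain list.
    static List<String> messages() {
        return List.of(
            "C->S ClientHello (TLS version, cipher suites, client random)",
            "S->C ServerHello (chosen version and cipher, server random)",
            "S->C Certificate (server's public-key certificate)",
            "S->C ServerHelloDone",
            "C->S ClientKeyExchange (pre-master secret, encrypted with server's public key)",
            "C->S ChangeCipherSpec",
            "C->S Finished (encrypted with the negotiated keys)",
            "S->C ChangeCipherSpec",
            "S->C Finished"
        );
    }

    public static void main(String[] args) {
        messages().forEach(System.out::println);
    }
}
```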
Talk about the TCP header format
- First of all, the source and destination port numbers are indispensable; without them, you don't know which application to deliver the data to.
- Next is the sequence number of the packet. Why number the packets? To solve the out-of-order problem, of course: without numbers, how would we know which packet should come first and which last? Like a steady old hand, TCP does things one at a time, and no matter how messy the situation gets, it doesn't lose track.
- There must also be an acknowledgement number. Each packet should be acknowledged, otherwise how would the sender know it was received? If it was not received, it should be resent until it is delivered; this solves the packet-loss problem. An old hand is dependable: what is promised gets done, and what can't be done right away still gets a reply.
- Then there are the state bits: for example, SYN initiates a connection, ACK acknowledges, RST resets a connection, and FIN terminates one. TCP is connection-oriented, so both parties maintain connection state, and sending packets with these state bits set drives state changes on both sides.
- Another important field is the window size. TCP does flow control: each side advertises a window identifying its current processing capacity, telling the peer "don't send too fast and overwhelm me, but don't send too slowly and starve me either".
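The fields above can be laid out exactly as the fixed 20-byte TCP header. A Java sketch using `ByteBuffer` (network byte order); the checksum is left at zero and options are omitted, so this is a teaching aid, not a packet you could put on the wire as-is:

```java
import java.nio.ByteBuffer;

public class TcpHeader {
    static final int FIN = 1, SYN = 1 << 1, RST = 1 << 2, PSH = 1 << 3, ACK = 1 << 4;

    // Pack the fixed 20-byte TCP header described above (no options, checksum left 0).
    static ByteBuffer build(int srcPort, int dstPort, long seq, long ack, int flags, int window) {
        ByteBuffer b = ByteBuffer.allocate(20);  // big-endian (network byte order) by default
        b.putShort((short) srcPort);             // source port
        b.putShort((short) dstPort);             // destination port
        b.putInt((int) seq);                     // sequence number
        b.putInt((int) ack);                     // acknowledgement number
        b.putShort((short) ((5 << 12) | flags)); // data offset = 5 words, then the flag bits
        b.putShort((short) window);              // window size (flow control)
        b.putShort((short) 0);                   // checksum (left 0 in this sketch)
        b.putShort((short) 0);                   // urgent pointer
        b.flip();
        return b;
    }

    public static void main(String[] args) {
        ByteBuffer syn = build(51000, 80, 1000, 0, SYN, 65535);
        System.out.printf("src=%d dst=%d%n",
                Short.toUnsignedInt(syn.getShort(0)), Short.toUnsignedInt(syn.getShort(2)));
    }
}
```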
Tell me about the time-honored three-way handshake
- First handshake: to establish a connection, the client sends a SYN packet (seq = x) to the server and enters the SYN_SENT state, waiting for the server's confirmation. SYN stands for Synchronize Sequence Numbers.
- Second handshake: upon receiving the SYN packet, the server must acknowledge the client's SYN (ack = x + 1) and send its own SYN packet (seq = y). The server then enters the SYN_RCVD state.
- Third handshake: after receiving the server's SYN+ACK packet, the client sends an ACK packet (ack = y + 1) to the server. Once it is sent, the client and server enter the ESTABLISHED state (the TCP connection succeeds), completing the three-way handshake.
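The three steps can be replayed as a small state-transition log. A sketch where x and y stand for the two sides' initial sequence numbers; the log format is invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;

public class Handshake {
    // Replays the three-way handshake and records each side's state transitions,
    // matching the seq/ack arithmetic described above (seq = x, ack = x + 1, ...).
    static List<String> run(long x, long y) {
        List<String> log = new ArrayList<>();
        log.add("client: CLOSED -> SYN_SENT, sends SYN seq=" + x);
        log.add("server: LISTEN -> SYN_RCVD, sends SYN seq=" + y + " ack=" + (x + 1));
        log.add("client: SYN_SENT -> ESTABLISHED, sends ACK ack=" + (y + 1));
        log.add("server: SYN_RCVD -> ESTABLISHED");
        return log;
    }

    public static void main(String[] args) {
        run(100, 300).forEach(System.out::println);
    }
}
```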
Why not twice?
The three-way handshake accomplishes two important things: it lets both parties confirm the other is ready to send and receive data, and it lets them negotiate their initial sequence numbers, which are sent and acknowledged during the handshake.
If we used only two handshakes instead of three, deadlock could occur. As an example, consider computers S and C. Suppose C sends a connection request packet to S, and S receives it and sends back an acknowledgement. Under a two-way handshake, S would consider the connection established and could start sending data. But if S's acknowledgement is lost in transit, C does not know whether S is ready, does not know what sequence number S chose, and even doubts whether S received its connection request at all. C therefore considers the connection not yet established, ignores any data S sends, and just keeps waiting for the connection acknowledgement; meanwhile S's data packets time out and are resent, over and over. That is the deadlock.
Talk about the classic four-way wave
- The client process sends a connection-release packet and stops sending data. The packet's header has FIN = 1 and sequence number seq = u (one more than the sequence number of the last byte of previously transmitted data). The client then enters the FIN_WAIT_1 state. Per TCP, a FIN packet consumes one sequence number even if it carries no data.
- After receiving the connection-release packet, the server sends an acknowledgement with ACK = 1, ack = u + 1, and its own sequence number seq = v, then enters the CLOSE_WAIT state. The TCP server notifies its higher-level application that the client has closed its sending direction. The connection is now half-closed: the client has no more data to send, but if the server sends data, the client will still accept it. This state lasts for a while, namely the duration of CLOSE_WAIT.
- After receiving the server's acknowledgement, the client enters the FIN_WAIT_2 state and waits for the server's connection-release packet (it may still receive final data from the server first).
- After sending its last data, the server sends its own connection-release packet with FIN = 1 and ack = u + 1 to the client. The connection has been half-closed; suppose the packet's sequence number is seq = w. The server then enters the LAST_ACK state, waiting for the client's acknowledgement.
- After receiving the server's connection-release packet, the client sends an acknowledgement with ACK = 1, ack = w + 1, and its own sequence number seq = u + 1, and enters the TIME_WAIT state. Note that the TCP connection is not yet released: the client enters the CLOSED state and revokes the corresponding TCB only after 2*MSL (twice the maximum segment lifetime) has elapsed.
- The server enters the CLOSED state as soon as it receives the client's acknowledgement, likewise revoking its TCB and ending the TCP connection. As you can see, the server finishes the TCP connection earlier than the client.
The client
- FIN_WAIT_1
- FIN_WAIT_2
- TIME_WAIT
- CLOSED
The server
- CLOSE_WAIT
- LAST_ACK
- CLOSED
Why does the TIME_WAIT state take 2MSL to return to CLOSED?
Although logically all four packets have been sent and we could enter CLOSED directly, we must assume the network is unreliable: the last ACK can be lost. The TIME_WAIT state exists to resend that possibly lost ACK. The client sends the final ACK, but it may be lost; if the server does not receive it, the server retransmits its FIN segment. The client therefore cannot close immediately; it must make sure the server received the ACK. After sending the ACK, the client enters TIME_WAIT, sets a timer, and waits 2MSL. If it receives a FIN again within that time, it resends the ACK and waits another 2MSL. 2MSL is twice the MSL (Maximum Segment Lifetime), the longest a segment can survive in the network, so 2MSL is the maximum time a send plus its reply can take. If no FIN arrives within 2MSL, the client concludes the ACK was received successfully and terminates the TCP connection.
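The whole teardown, including the 2MSL wait, can likewise be replayed as a transition log; u, v, and w follow the sequence numbers used in the four steps above, and the log format is invented for illustration:

```java
import java.util.List;

public class FourWayWave {
    // Replays the four-way close: u is the client FIN's seq, v and w the server's seqs.
    static List<String> run(long u, long v, long w) {
        return List.of(
            "client: ESTABLISHED -> FIN_WAIT_1, sends FIN seq=" + u,
            "server: ESTABLISHED -> CLOSE_WAIT, sends ACK ack=" + (u + 1) + " seq=" + v,
            "client: FIN_WAIT_1 -> FIN_WAIT_2",
            "server: CLOSE_WAIT -> LAST_ACK, sends FIN seq=" + w + " ack=" + (u + 1),
            "client: FIN_WAIT_2 -> TIME_WAIT, sends ACK ack=" + (w + 1),
            "server: LAST_ACK -> CLOSED on receiving the ACK",
            "client: TIME_WAIT -> CLOSED after waiting 2*MSL"
        );
    }

    public static void main(String[] args) {
        run(500, 700, 900).forEach(System.out::println);
    }
}
```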
How does TCP ensure reliable transmission
- Application data is divided into data blocks that TCP considers best for sending.
- TCP numbers each packet it sends. The receiver sorts the packets and sends the ordered data to the application layer.
- Checksum: TCP keeps a checksum over its header and data. This end-to-end checksum detects any change to the data in transit. If a segment's checksum fails verification, TCP discards the segment and does not acknowledge it.
- The TCP receiver discards duplicate data.
- Flow control: each side of a TCP connection has a fixed-size buffer, and the receiver only lets the sender send as much data as its buffer can accept. When the receiver cannot keep up with the sender, it can prompt the sender to slow down, preventing packet loss. TCP implements flow control with a variable-size sliding-window protocol.
- Congestion control: Reduces data transmission when the network is congested.
- ARQ protocol: also aimed at reliable transmission. Its basic principle is stop-and-wait: after sending each packet, stop and wait for the peer's acknowledgement, and send the next packet only after the acknowledgement arrives.
- Timeout retransmission: when TCP sends a segment, it starts a timer and waits for the destination to acknowledge it. If no acknowledgement arrives in time, the segment is resent.
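The checksum mentioned above is the Internet checksum of RFC 1071, a 16-bit one's-complement sum. A sketch of just that sum; real TCP additionally covers a pseudo-header containing the IP addresses, which is omitted here:

```java
public class InternetChecksum {
    // 16-bit one's-complement sum over the data (RFC 1071). The receiver recomputes
    // this and discards the segment on mismatch.
    static int checksum(byte[] data) {
        long sum = 0;
        for (int i = 0; i < data.length; i += 2) {
            int word = (data[i] & 0xFF) << 8;
            if (i + 1 < data.length) word |= data[i + 1] & 0xFF; // pad odd length with zero
            sum += word;
        }
        while ((sum >> 16) != 0) {
            sum = (sum & 0xFFFF) + (sum >> 16); // fold carries back into the low 16 bits
        }
        return (int) (~sum & 0xFFFF);
    }

    public static void main(String[] args) {
        byte[] seg = "hello".getBytes();
        System.out.printf("checksum=0x%04X%n", checksum(seg));
        // Property: summing the data together with its checksum gives 0xFFFF
        // before complementing, which is how the receiver verifies it.
    }
}
```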
The end
Xiao 66 knows these are mostly rote "eight-legged essay" interview answers; my own insight into networking is not deep, and a lot of this sounds hollow, but without it you can't get through interviews; that is simply the state of China's software industry. Even so, Xiao 66 will keep shoring up the fundamentals. Let's encourage each other. Keep going!
Daily plea for likes
OK everyone, that's all for this article. If you've read this far, you're a true fan.
Creation is not easy; your support and recognition are the greatest motivation for my writing. See you in the next article!
Search WeChat for "Six Pulse Excalibur Programming Life" and reply 888; I have gathered a lot of materials for you.