This article mainly covers HTTP status codes, HTTP 1.0/1.1/2.0, HTTPS encryption and decryption, the TCP three-way handshake, and related topics. It took some effort to put together, so a like in support is appreciated 😁, thank you.

The difference between GET and POST requests

  • Application scenario: GET is used to obtain data, POST to submit data.
  • Security: GET puts parameters in the URL, which is less secure because the URL is kept in the history.
  • Packet format: in a GET request the entity body is empty; in a POST request the entity body carries the data to be sent.
  • Request length: the length of data sent in a GET request is limited by browsers and servers.
  • Caching: browsers generally cache GET requests, but rarely POST requests.
  • Parameter types: POST supports more data types for parameter passing.
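The packet-format difference can be seen by building both request shapes by hand. A minimal Python sketch (the URL and parameter names are illustrative, not from the article):

```python
from urllib.parse import urlencode

params = {"user": "alice", "page": "2"}

# GET: parameters are appended to the URL; the entity body is empty.
get_url = "http://example.com/search?" + urlencode(params)

# POST: the URL stays clean; parameters travel in the request body.
post_url = "http://example.com/search"
post_body = urlencode(params).encode()  # bytes sent as the entity body

print(get_url)    # http://example.com/search?user=alice&page=2
print(post_body)  # b'user=alice&page=2'
```

Because the GET parameters end up in the URL, they are visible in history and logs, which is the security point made above.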

The difference between POST and PUT requests

PUT: simply put, updates a resource and is idempotent. POST: simply put, creates a resource and is non-idempotent.

In the HTTP specification, POST is non-idempotent: multiple calls can produce different results, such as creating the same user more than once due to network retries. PUT /users/1 creates or replaces the user with id 1, and multiple calls produce the same result, so PUT is idempotent.
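The idempotency difference can be sketched with a toy in-memory store (all names here are illustrative, not part of any real API):

```python
# A minimal in-memory sketch: PUT targets an explicit id, POST mints a new one.
store = {}
generated = iter(range(100, 1000))  # stand-in for server-generated ids

def put(resource_id, data):
    """PUT /users/<id>: repeated identical calls leave one identical resource."""
    store[resource_id] = data

def post(data):
    """POST /users: every call creates a brand-new resource."""
    rid = next(generated)
    store[rid] = data
    return rid

put("u1", "alice")
put("u1", "alice")   # idempotent: still exactly one user "u1"
post("bob")
post("bob")          # non-idempotent: two distinct "bob" users now exist
print(len(store))    # 3
```

Retrying a PUT after a network failure is therefore safe; retrying a POST risks duplicate resources.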

The URL length of the GET method is limited

In fact, the HTTP protocol specification does not limit the length of the URL in a GET request. The limits come from browsers and servers.

Internet Explorer limits URL length to 2083 bytes (2K + 35). Since Internet Explorer has the smallest limit among browsers, keeping URLs under 2083 bytes during development ensures they work in all browsers without a problem.

  • Internet Explorer: up to 2083 characters.
  • Chrome: up to 8192 characters.
  • Other browsers: 8192 characters or more.

Mainstream servers also limit the URL length in GET requests, typically to 8192 characters or more.

Differences between HTTP 1.0 and HTTP 1.1

Connections: HTTP 1.0 uses non-persistent connections by default, while HTTP 1.1 uses persistent connections by default. HTTP 1.1's persistent connections reuse the same TCP connection for multiple HTTP requests, avoiding the latency of establishing a new connection for every request as non-persistent connections do.

Resource requests: HTTP 1.0 wastes bandwidth in some cases. For example, when the client needs only part of an object, the server still sends the whole object, and resumable downloads are not supported. HTTP 1.1 introduced the Range header field in requests, which allows requesting only part of a resource; the response code is 206 (Partial Content). This makes it easy for developers to make the most of bandwidth and connections.
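The Range/206 mechanism can be sketched as a small server-side handler. A simplified illustration assuming a single "bytes=start-end" range with both bounds present (real servers must also handle open-ended and multiple ranges):

```python
def handle_range(resource: bytes, range_header: str):
    """Serve a 'bytes=start-end' Range request, returning (status, body).
    Simplified sketch: one range, both bounds given."""
    spec = range_header.split("=", 1)[1]          # drop the 'bytes=' prefix
    start_s, end_s = spec.split("-")
    start, end = int(start_s), int(end_s)
    if start >= len(resource):
        return 416, b""                            # 416 Range Not Satisfiable
    return 206, resource[start:end + 1]            # the end index is inclusive

status, body = handle_range(b"Hello, world!", "bytes=0-4")
print(status, body)  # 206 b'Hello'
```

A resumable download simply re-requests from the byte offset where it stopped.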

Caching: HTTP 1.0 mainly uses If-Modified-Since and Expires in the headers as cache criteria. HTTP 1.1 introduces more cache-control policies, with additional headers such as ETag, If-Unmodified-Since, If-Match, and If-None-Match to control the caching strategy.

Host header: HTTP 1.1 added the Host field to specify the server's domain name. HTTP 1.0 assumed each server was bound to a unique IP address, so the URL in the request message carried no hostname. With the development of virtual hosting, however, one physical server can host multiple virtual hosts sharing the same IP address. The Host field lets requests be routed to different sites on the same server.

New methods: HTTP 1.1 added request methods such as PUT, HEAD, and OPTIONS.
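Virtual hosting via the Host header boils down to a lookup on the server side. A minimal sketch (the site names and contents are made up for illustration):

```python
# Several sites share one IP address; the server routes on the Host header.
sites = {
    "blog.example.com": "blog content",
    "shop.example.com": "shop content",
}

def route(request_headers: dict) -> str:
    host = request_headers.get("Host")
    return sites.get(host, "404: unknown virtual host")

print(route({"Host": "shop.example.com"}))  # shop content
```

Without the Host field (as in HTTP 1.0), the server would have no way to tell which of the co-hosted sites the request was for.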

Differences between HTTP 1.1 and HTTP 2.0

Binary protocol: HTTP/2 is a binary protocol. In HTTP/1.1, the header must be text (ASCII-encoded) while the body can be text or binary. In HTTP/2, both header and body are binary and are collectively called "frames", divided into header frames and data frames. The frame concept is the basis of its multiplexing.

Multiplexing: HTTP/2 implements multiplexing. It still runs over a TCP connection, but within one connection both client and server can send multiple requests or responses at the same time, without strict ordering, thus avoiding queue congestion.

Data streams: HTTP/2 uses the concept of streams because its packets are sent out of order, and consecutive packets within the same connection may belong to different requests. Each packet must therefore be marked with the request it belongs to. HTTP/2 calls all the packets of one request or response a stream, and each stream has a unique number. Packets must carry their stream ID when sent so the receiver can tell which stream they belong to.

Header compression: HTTP/2 implements header compression. Since HTTP 1.1 is stateless, all information must be attached to every request, so many request fields are repeated, such as Cookie and User-Agent. Attaching the same content to every request wastes a lot of bandwidth and hurts speed. HTTP/2 optimizes this with header compression. On the one hand, header information is compressed (e.g. with gzip or compress) before sending. On the other hand, both client and server maintain a header table in which fields are stored with an index number; repeated fields are then sent as just the index instead of the full field, which increases speed.
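The index-table idea can be sketched in a few lines. This is a toy model of the concept, not the real HPACK algorithm (which has a static table, Huffman coding, and eviction rules):

```python
class HeaderTable:
    """Toy sketch of HTTP/2-style header indexing (not real HPACK):
    the first occurrence of a field is sent literally and stored;
    repeats are replaced by a small integer index."""
    def __init__(self):
        self.table = []

    def encode(self, field):
        if field in self.table:
            return ("index", self.table.index(field))
        self.table.append(field)
        return ("literal", field)

    def decode(self, token):
        kind, value = token
        if kind == "index":
            return self.table[value]
        self.table.append(value)
        return value

enc, dec = HeaderTable(), HeaderTable()
msgs = [("user-agent", "Mozilla/5.0"), ("user-agent", "Mozilla/5.0")]
wire = [enc.encode(f) for f in msgs]
print(wire[1])  # ('index', 0) -- the repeated field costs only an index
assert [dec.decode(t) for t in wire] == msgs
```

Both sides keep their tables in sync, so a long repeated Cookie or User-Agent shrinks to a single small number on the wire.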

Server push: HTTP/2 allows the server to send resources to the client unsolicited; this is called server push. Server push delivers necessary resources to the client ahead of time, reducing latency. Note that under HTTP/2 the server actively pushes static resources, which is different from WebSocket and SSE, which send real-time data to the client.

Head-of-line blocking is caused by HTTP's basic request-reply model. HTTP specifies that packets are processed first in, first out (FIFO): requests in the queue have no priority beyond the order in which they were queued, and the earliest requests are processed first. If the request at the head of the queue is delayed because it is processed too slowly, then all subsequent requests in the queue must wait too, incurring undue time costs; this is queue congestion.

The difference between HTTP and HTTPS

HTTPS requires a CA certificate, which costs money; HTTP does not. HTTP transmits in plaintext, while HTTPS uses secure SSL/TLS encrypted transmission. HTTP uses port 80; HTTPS uses port 443. HTTP connections are simple and stateless; HTTPS layers SSL/TLS over HTTP to encrypt transmission and authenticate identities, making it more secure than HTTP.

What happens when you type the URL into the browser and press Enter?

URL parsing: The browser first parses the URL, determining the transport protocol to use and the path of the requested resource. Illegal characters are escaped.

Cache judgment: If a valid cache exists, use the cache directly, otherwise make a new request to the server.

DNS resolution: Obtain the IP address through the following steps: check whether the local DNS cache has the record -> if not -> send a request to the DNS servers, starting from the root -> obtain the IP address.

Obtain the MAC address: To get the MAC address of the destination, determine whether the destination address and the current device are on the same subnet. If not, the MAC address of the gateway is used and the gateway forwards the packet.

TCP Three-way handshake: Establishes a TCP connection.

HTTPS handshake: If HTTPS is used, a TLS four-way handshake precedes communication. First, the client sends the protocol version it uses, a random number, and the encryption methods it supports. On receiving this, the server confirms the encryption method and sends back a random number and its own digital certificate. The client checks whether the certificate is valid; if so, it generates another random number, encrypts it with the public key in the certificate, and sends it to the server, along with a hash of all previous contents for the server to verify. After receiving the data, the server decrypts it with its own private key and likewise sends a hash of all previous contents for the client to verify. At this point both sides hold three random numbers, from which, according to the previously agreed encryption method, a session key is generated; from then on, both sides encrypt their data with this key before transmitting it.

Return data: When a page request is sent to the server, the server returns an HTML file as a response. Upon receiving the response, the browser parses the HTML file to start the page rendering process.

Page rendering: The browser first builds a DOM tree from the HTML file, and a CSSOM tree from the parsed CSS. If a script tag is encountered, then unless it carries the defer or async attribute, loading and executing the script blocks page rendering. Once the DOM and CSSOM trees are built, the render tree is constructed from them; layout is then performed based on the render tree, and after layout completes, the page is painted via the browser's UI backend. At this point the entire page appears.

TCP four-way wave: The last step is the four-way wave by which the TCP connection is closed. When the client considers data delivery complete, it sends a connection release request to the server. After receiving it, the server tells the application layer to release the TCP connection, sends an ACK packet, and enters the CLOSE_WAIT state; the client-to-server direction of the connection is now released and data sent by the client is no longer accepted. But because the TCP connection is bidirectional, the server can still send data to the client. When the server has finished sending data, it sends its own connection release request to the client and enters the LAST_ACK state. After receiving the release request, the client sends an acknowledgement to the server and enters the TIME_WAIT state. This state lasts 2MSL (MSL is the maximum segment lifetime, the longest a segment can survive in the network before being discarded). If the server does not retransmit within this period, the client enters the CLOSED state; the server enters CLOSED as soon as it receives the acknowledgement.

Understanding keep-alive

By default in HTTP 1.0, the client and server create a new connection for each request/reply and disconnect immediately after completion; this is a short connection. When keep-alive mode is used, the connection from the client to the server is kept alive, so subsequent requests to the server avoid establishing or re-establishing the connection; this is a long connection. It is used as follows:

  • HTTP 1.0 does not use keep-alive by default (that is, Connection: close is the default), so to keep the connection alive you must explicitly send the Connection: keep-alive field; to disconnect, send the Connection: close field;
  • HTTP 1.1 specifies persistent connections by default. When a data transfer completes, the TCP connection stays open and waits for further transfers over the same channel under the same domain name. To close it, the client needs to send the Connection: close header field.
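The two defaults above reduce to a small decision function. A sketch (the function name is illustrative):

```python
def connection_persistent(http_version: str, headers: dict) -> bool:
    """Decide whether to keep the TCP connection open, per the rules above."""
    conn = headers.get("Connection", "").lower()
    if http_version == "HTTP/1.1":
        return conn != "close"          # persistent unless told otherwise
    return conn == "keep-alive"         # HTTP/1.0: persistence is opt-in

assert connection_persistent("HTTP/1.1", {}) is True
assert connection_persistent("HTTP/1.1", {"Connection": "close"}) is False
assert connection_persistent("HTTP/1.0", {}) is False
assert connection_persistent("HTTP/1.0", {"Connection": "keep-alive"}) is True
```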

Advantages:

  • Less CPU and memory usage (due to fewer connections open at the same time);
  • Allow HTTP pipelining of requests and replies;
  • Reduced congestion control (TCP connections are reduced);
  • Reduced latency for subsequent requests (no handshake required);
  • You do not need to close the TCP connection when reporting an error.

Disadvantages:

  • A TCP connection held open for a long time may occupy system resources to no purpose, wasting them.

How does HTTP load a page with multiple images?

HTTP 1: browsers limit the number of simultaneous TCP connections per domain (commonly 6), so the browser must request connections in multiple rounds. You can work around this by serving resources from multiple domains, which increases the number of simultaneous requests and speeds up page image retrieval.

HTTP 2: many resources can be loaded at once, because HTTP 2 supports multiplexing and can send multiple HTTP requests within a single TCP connection.

What are the cache-relevant HTTP request headers

Strong caching: Expires, Cache-Control

Negotiation caching: ETag, If-None-Match, Last-Modified, If-Modified-Since
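The negotiation-cache headers pair up into a revalidation decision on the server. A simplified sketch (exact-string date comparison; a real server parses HTTP dates):

```python
def revalidate(request: dict, resource: dict) -> int:
    """Return 304 when the client's cached validators still match, else 200.
    The ETag pair (If-None-Match) takes priority over Last-Modified."""
    if "If-None-Match" in request:
        return 304 if request["If-None-Match"] == resource["ETag"] else 200
    if "If-Modified-Since" in request:
        return 304 if request["If-Modified-Since"] == resource["Last-Modified"] else 200
    return 200

res = {"ETag": '"abc123"', "Last-Modified": "Tue, 01 Aug 2023 00:00:00 GMT"}
print(revalidate({"If-None-Match": '"abc123"'}, res))  # 304
print(revalidate({"If-None-Match": '"stale"'}, res))   # 200
```

A 304 carries no body, so the browser reuses its cached copy, which is the whole point of the negotiation cache.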

What is the HTTPS protocol?

Hypertext Transfer Protocol Secure (HTTPS) is a transport protocol for secure communication over computer networks. HTTPS communicates via HTTP and encrypts the packets using SSL/TLS. Its main purpose is to authenticate web servers and to protect the privacy and integrity of the exchanged data.

HTTP transmits information in plaintext, which may lead to information eavesdropping, information tampering, and information hijacking. TLS/SSL, however, provides the functions of identity authentication, information encryption, and integrity verification to avoid such problems.

The main responsibility of the security layer is to encrypt the data of the HTTP request and decrypt the received HTTP content.

How TLS/SSL works

TLS/SSL is a secure transport layer protocol between TCP and HTTP. It does not affect the existing TCP and HTTP protocols. Therefore, HTTP pages do not need to be modified in HTTPS mode.

TLS/SSL relies on three basic algorithms: hash, symmetric encryption, and asymmetric encryption. The functions of these three algorithms are as follows:

  • Asymmetric encryption realizes identity authentication and key negotiation.
  • Symmetric encryption uses the negotiated key to encrypt the data.
  • Hash functions verify the integrity of the information.

Asymmetric encryption

In asymmetric encryption we have two keys: a public key and a private key. The public key is public and the private key is kept secret. Data encrypted with the private key can be decrypted only with the corresponding public key, and data encrypted with the public key only with the corresponding private key. We can publish the public key, and any client wishing to communicate with us can encrypt data with it; only we can decrypt it with the private key, keeping the data safe. However, asymmetric encryption has a disadvantage: the encryption process is very slow, so using it for every communication would cause long waits.
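The mechanics can be shown with textbook RSA using tiny demo numbers. This is purely illustrative; real keys are thousands of bits and generated by crypto libraries, never numbers this small:

```python
# Textbook RSA with toy parameters: n = 61 * 53 = 3233, public exponent
# e = 17, and private exponent d = 2753 satisfies e*d ≡ 1 (mod φ(n)).
n, e, d = 3233, 17, 2753

def encrypt(m: int) -> int:
    """Anyone can encrypt with the public key (n, e)."""
    return pow(m, e, n)

def decrypt(c: int) -> int:
    """Only the holder of the private key knows d."""
    return pow(c, d, n)

message = 65
cipher = encrypt(message)
assert cipher != message          # ciphertext differs from the plaintext
assert decrypt(cipher) == message  # only the private key recovers it
```

Note how slow this scales: `pow` with enormous exponents on huge moduli is the reason asymmetric crypto is reserved for authentication and key negotiation.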

Common asymmetric encryption algorithms include RSA, ECC, and DH. Keys come in pairs, usually called a public key and a private key. Information encrypted with the public key can be decrypted only with the private key, and information encrypted with the private key only with the public key. Because knowing the public key does not let you decrypt messages encrypted with it, the server need only hold one private key to communicate with many clients (one-to-many), and clients can also use the public key to verify that the server holds the private key, i.e. to authenticate the server's identity.

Features: Asymmetric encryption is one-to-many: the server only needs to maintain one private key to communicate with multiple clients, but anything the server encrypts with its private key can be decrypted by all clients holding the public key. In addition, the algorithm is computationally complex and encryption is slow.

Based on these characteristics, TLS/SSL uses asymmetric encryption to communicate with the server, authenticate its identity, and negotiate the key used for symmetric encryption. The symmetric algorithm then uses the negotiated key to encrypt the messages and their digests, with different sessions using different symmetric keys so that the information can be read only by the two parties. This combines the strengths of both approaches.

Symmetric encryption

In symmetric encryption, both parties use the same key to encrypt and decrypt data. The problem with symmetric encryption, however, is ensuring that the key itself is transmitted securely: the key must still travel over the network, and once someone else obtains it, the whole encryption scheme is useless. This is where asymmetric encryption comes in.
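The "same key both ways" property can be illustrated with a toy XOR stream cipher. This is only a teaching sketch of the symmetric idea; real TLS uses algorithms like AES or ChaCha20, and repeating-key XOR is trivially breakable:

```python
from itertools import cycle

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """Toy symmetric 'cipher': XOR with a repeating key.
    Applying the same key twice recovers the plaintext."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = b"shared-secret"
cipher = xor_crypt(b"hello over TCP", key)
assert cipher != b"hello over TCP"
assert xor_crypt(cipher, key) == b"hello over TCP"  # same key decrypts
```

The sketch also makes the key-distribution problem concrete: anyone who sees `key` in transit can decrypt everything.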

Common symmetric encryption algorithms include AES-CBC, DES, 3DES, and AES-GCM. The same key is used to encrypt and decrypt messages. Because both parties must hold the same secret key, and the key must stay secret to prevent eavesdropping, communication is one-to-one.

Features: Symmetric encryption offers one-to-one information transmission and requires the same secret to be shared; the security of that secret is the basis of information security. When a server communicates with N clients, it must maintain N key records, and changing keys is difficult.

Hash function hash

Common hash functions are MD5, SHA1, and SHA256. This function is characterized by one-way irreversibility and is very sensitive to the input data. The length of the output is fixed. Any data modification will change the result of the hash function, which can be used to prevent information tampering and verify the integrity of data.
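These properties (fixed output length, extreme input sensitivity) are easy to demonstrate with the standard library:

```python
import hashlib

# One-way, fixed-length, and extremely sensitive to the input:
a = hashlib.sha256(b"transfer $100").hexdigest()
b = hashlib.sha256(b"transfer $900").hexdigest()  # one character changed

assert len(a) == len(b) == 64   # output length is fixed (256 bits as hex)
assert a != b                   # any modification changes the digest
print(a)
print(b)
```

This is why a digest detects tampering: the attacker cannot modify the message without the digest changing (and cannot invert the digest to forge a matching message).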

Features: A hash alone cannot prevent tampering during transmission. Since the transmission is in plaintext, a man-in-the-middle can modify the message and recompute its digest, so the transmitted message and its digest must themselves be encrypted.

What is a digital certificate?

The scheme so far is still not necessarily secure, because there is no way to be sure that the public key we obtained is the genuine one. A middleman may intercept the public key sent to us by the other party and substitute his own, so that when we encrypt messages with "their" public key, he can decrypt them with his private key. He then relays messages to the other party the same way, impersonating us, and our information is stolen without us knowing. To solve such problems, digital certificates can be used.

First, a hash algorithm is applied to the public key and other information to generate a message digest. Then a trusted certificate authority (CA) encrypts the message digest with its private key to form a signature. Finally, the original information and the signature are combined; this is called a digital certificate. On receiving the digital certificate, the receiver first generates a digest from the original information using the same hash algorithm, then decrypts the digest in the certificate using the CA's public key, and finally compares the decrypted digest with the generated one to find out whether the information has been altered.

This method depends on the trustworthiness of the certificate authority. Browsers generally ship with the certificates of top-level certificate authorities built in, which amounts to trusting them automatically; only then can data security be guaranteed.

HTTPS communication (handshake) process

  1. The client makes a request to the server containing the protocol version number used, a random number generated, and the encryption method supported by the client.
  2. After receiving the request, the server confirms the encryption method used by both parties, gives the server’s certificate, and a random number generated by the server.
  3. After confirming that the server certificate is valid, the client generates a new random number, encrypts this random number using the public key in the digital certificate, and sends it to the server. It also provides a hash value for all previous content that the server can verify.
  4. The server uses its own private key to decrypt the random number sent by the client. And provide the hash value of all the previous content for the client to verify.
  5. The client and the server use the first three random numbers according to the agreed encryption method to generate a dialogue secret key, and the subsequent dialogue process uses this secret key to encrypt information.
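Step 5 above, deriving one session key from the three random numbers, can be sketched as follows. This is a toy stand-in using a bare SHA-256; real TLS uses a PRF/HKDF construction, but the point stands: both sides hold the same three inputs, so both derive the same key:

```python
import hashlib

def derive_session_key(client_random: bytes, server_random: bytes,
                       premaster: bytes) -> bytes:
    """Toy key derivation: mix the three random values with a hash.
    Illustrative only; TLS actually uses a PRF/HKDF, not bare SHA-256."""
    return hashlib.sha256(client_random + server_random + premaster).digest()

# Both sides hold the same three values, so both derive the same key.
k_client = derive_session_key(b"c-rand", b"s-rand", b"premaster")
k_server = derive_session_key(b"c-rand", b"s-rand", b"premaster")
assert k_client == k_server
print(k_client.hex())
```

Only the premaster secret travels encrypted (step 3), so an eavesdropper who saw the two plaintext randoms still cannot derive the key.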

Pros and cons of HTTPS

Advantages:

  • Using the HTTPS protocol authenticates users and servers, ensuring that data is sent to the correct client and server;
  • HTTPS provides encrypted transmission and identity authentication, making communication more secure and preventing data from being stolen or modified in transit, thus ensuring data security;
  • HTTPS is the most secure solution under the current architecture; while not absolutely secure, it substantially increases the cost of a man-in-the-middle attack.

Disadvantages:

  • HTTPS requires encryption and decryption processing on both server and client, costing more server resources;
  • The HTTPS protocol's handshake phase is time-consuming, adding to page load time;
  • SSL certificates cost money, and the more capable the certificate, the higher the cost;
  • HTTPS connections consume considerably more server-side resources; supporting a site with more visitors requires more investment;
  • An SSL certificate needs to be bound to an IP address; you cannot bind multiple domain names to the same IP.

How is HTTPS secure?

In combination with the two encryption modes, the symmetric encryption key is encrypted with the asymmetric encryption public key and sent. The receiver decrypts the symmetric encryption key using the private key. Then the two parties can use symmetric encryption to communicate.

This brings up another problem, the middleman problem:

If there is a middleman between the client and server, the middleman only needs to replace the public key of the communication between the two sides with his own public key, so that the middleman can easily decrypt all the data sent by the two sides.

Therefore, a certificate issued by a secure third party (a CA) is required to prove the server's identity and prevent man-in-the-middle attacks. A certificate includes the issuer, the certificate's purpose, the user's public key, the hash algorithm, and the certificate's expiry time.

But what if the middleman tampers with the certificate? Then the proof of identity becomes invalid and the certificate was bought for nothing, so another technique is needed: digital signatures.

A digital signature is produced by using the CA's hash algorithm to compute a digest of the certificate, then encrypting that digest with the CA's private key; the result is the signature. When someone sends me his certificate, I use the same hash algorithm to regenerate the message digest, then decrypt the digital signature with the CA's public key to obtain the digest created by the CA. Comparing the two tells me whether the certificate was tampered with in transit. This maximizes the security of communication.

The HTTP status code

Categories of status codes:

| Category | Reason phrase | Description |
| --- | --- | --- |
| 1xx | Informational (informational status codes) | The accepted request is being processed |
| 2xx | Success (success status codes) | The request was processed successfully |
| 3xx | Redirection (redirection status codes) | Additional action is required to complete the request |
| 4xx | Client Error (client error status codes) | The server cannot process the request |
| 5xx | Server Error (server error status codes) | The server failed to process the request |
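The classification is simply the leading digit, which makes it trivial to compute. A small sketch:

```python
def status_category(code: int) -> str:
    """Map an HTTP status code to its class, per the table above."""
    return {1: "Informational", 2: "Success", 3: "Redirection",
            4: "Client Error", 5: "Server Error"}.get(code // 100, "Unknown")

assert status_category(204) == "Success"
assert status_category(301) == "Redirection"
assert status_category(404) == "Client Error"
assert status_category(502) == "Server Error"
```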

200 OK indicates that the request from the client is processed by the server.

204 No Content indicates that the request sent by the client was processed normally by the server, but no content is returned: the response packet contains no entity body. It is generally used when the client only needs to send information to the server and the server has no content to send back.

206 Partial Content indicates that the client made a range request and the server performed the partial GET. The response message contains the entity content in the range specified by Content-Range.

301 Moved Permanently (permanent redirect) indicates that the requested resource has been assigned a new URI and that the new URI should be used in the future. The new URI is specified in the Location field of the HTTP response headers. If the user has bookmarked the original URI, the bookmark is updated to the new URI given in Location. Meanwhile, search engines crawl the new content and replace the old URL with the redirected one.

Usage scenarios

  • When we change domain names and the old domain is no longer used, visits to the old domain are redirected with 301 to the new one. This also tells search engines that the indexed domain should be replaced by the new domain.
  • When the domain without www appears in search results but the domain with www is not indexed (or vice versa), a 301 redirect tells the search engine which domain is our target.

302 Found (temporary redirect) indicates that the requested resource has been assigned a new URI and the user is expected to access it via that new URI. It is similar to 301 Moved Permanently, but 302 means the resource is moved only temporarily, not permanently: the resource's URI may change again in the future. If the user saves the URI as a bookmark, the bookmark is not updated as it would be with 301; the URI of the page that returned 302 is kept. Meanwhile, search engines fetch the new content but keep the old URL, because a 302 tells them the new URL is only temporary.

Usage scenarios

  • When we are doing an activity, login to the home page automatically redirects to enter the activity page.
  • Users who are not logged in access the user center and are redirected to the login page.
  • Visit 404 page redirected to home page.

304 Not Modified is related to browser caching. It is returned when the client sends a conditional request and the server allows access to the resource, but the condition is not met. A 304 response contains no response body. Although 304 is classified under 3xx, it has nothing to do with redirection.

Conditional request (HTTP conditional request): a GET request whose packet includes any of the headers If-Match, If-None-Match, If-Modified-Since, If-Unmodified-Since, or If-Range.

The 304 status code is not an error, but rather tells the client that there is a cache and to use the cached data directly. Only the header information is returned to the page, and there is no content part, which improves the performance of the page to a certain extent.

400 Bad Request Syntax errors exist in the Request packet. When an error occurs, you need to modify the content of the request and send the request again. In addition, the browser treats the status code as if it were 200 OK.

401 Unauthorized indicates that the request requires authentication information that passes HTTP authentication (BASIC or DIGEST authentication). If the request has been made before, it means user authentication failed, for example a failed login.

403 Forbidden indicates that the server refuses access to the requested resource. The server need not give a reason, but may state one in the body of the response entity. After entering this state, authentication does not help: the access is forbidden and the decision is tied to application logic.

404 Not Found The requested resource could Not be Found on the server.

405 Method Not Allowed indicates that the server recognizes the method requested by the client, but does not allow it for this resource. The client can use the OPTIONS method (preflight) to see which methods the server allows, as shown below:

Access-Control-Allow-Methods: GET,HEAD,PUT,PATCH,POST,DELETE

500 Internal Server Error indicates that an error occurred while the server was executing the request, for example a bug in the code.

502 Bad Gateway indicates that a server acting as a gateway or proxy received an invalid response from the upstream server. Note that a 502 error usually cannot be fixed by the client; it must be fixed by the intermediate web or proxy server.

503 Service Unavailable Indicates that the server is temporarily overloaded or undergoing maintenance downtime and cannot process requests.

Usage Scenarios:

  • When the server is down for maintenance, take the initiative to respond to requests with 503;
  • Nginx sets the speed limit. If you exceed the speed limit, 503 is returned.

504 Gateway Timeout indicates that the gateway or proxy server did not receive the expected response within the allotted time. It was added in HTTP 1.1.

Usage scenario: Code execution times out, or an infinite loop occurs.

TCP three-way handshake

A three-way handshake means that establishing a TCP connection requires the client and server to exchange three packets. Its main purpose is to confirm that both sides can send and receive normally, and to specify their initial sequence numbers in preparation for reliable transmission. In effect, it connects to the server's designated port, establishes the TCP connection, synchronizes both sides' sequence and acknowledgement numbers, and exchanges TCP window size information.

At first, the client is in the Closed state and the server is in the Listen state.

First handshake: The client sends a SYN packet to the server and indicates the ISN sequence number of the client. The client is in the SYN_SEND state.

Second handshake: After receiving the SYN packet from the client, the server responds with its own SYN packet, specifying its own ISN. At the same time, it sets the ACK value to the client's ISN + 1, indicating that it has received the client's SYN. The server is in the SYN_RCVD state.

Third handshake: After receiving a SYN packet, the client sends an ACK packet. Similarly, the ISN + 1 of the server is used as the ACK value, indicating that a SYN packet has been received from the server. The client is in the ESTABLISHED state. After receiving the ACK packet, the server is also in the ESTABLISHED state. In this case, a connection is ESTABLISHED.
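The sequence/acknowledgement arithmetic across the three packets can be sketched with plain dictionaries (field names are illustrative; real segments have full TCP headers):

```python
import random

# Each side picks a random initial sequence number (ISN).
client_isn = random.randrange(2**32)
server_isn = random.randrange(2**32)

# 1) SYN: client announces its ISN.
syn     = {"flags": "SYN",     "seq": client_isn}
# 2) SYN+ACK: server announces its ISN and acknowledges client_isn + 1.
syn_ack = {"flags": "SYN,ACK", "seq": server_isn, "ack": syn["seq"] + 1}
# 3) ACK: client acknowledges server_isn + 1.
ack     = {"flags": "ACK",     "seq": client_isn + 1,
           "ack": syn_ack["seq"] + 1}

# Each side has now confirmed the other's ISN, proving both directions work.
assert syn_ack["ack"] == client_isn + 1
assert ack["ack"] == server_isn + 1
print("connection ESTABLISHED")
```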

So why three handshakes? Not twice?

  • To confirm that the receiving and sending capabilities of both parties are normal
  • If you shake hands twice, something like this happens:

For example, suppose the client sends a connection request but receives no confirmation because the request packet is lost, so it retransmits. The second request arrives, a connection is established, data is transferred, and the connection is released. Now suppose the first, delayed request segment eventually reaches the server. The server takes it for a new connection request from the client and sends back a confirmation agreeing to establish a connection. Without a three-way handshake, the connection would be considered established as soon as the server sends its confirmation. But the client ignores this confirmation and sends no data, so the server sits waiting for data from the client that never comes, wasting resources.

The TCP three-way handshake establishes a connection by confirming the initial sequence numbers, telling each side from which sequence number the other can correctly receive. The third handshake is the client confirming the server's initial sequence number; with only two handshakes, the server would have no way to know whether its sequence number had been confirmed. At the same time, this prevents invalid (stale) connection request segments from suddenly reaching the server and causing errors.

TCP’s four waves

Both sides are initially in the ESTABLISHED state. Suppose the client initiates the shutdown request first. The four-wave process is as follows:

First wave: The client sends a FIN packet with a specified sequence number. The client is in the FIN_WAIT1 state.

That is, the client sends a connection release segment (FIN=1, seq=u), stops sending data, actively closes the TCP connection, and enters the FIN_WAIT1 (termination wait 1) state, waiting for the server's acknowledgment.

Second wave: After receiving the FIN, the server sends an ACK packet, using the client's sequence number + 1 as the ACK value, indicating that the FIN has been received. The server is now in the CLOSE_WAIT state.

That is, after receiving the connection release segment, the server sends an acknowledgment segment (ACK=1, ack=u+1, seq=v) and enters the CLOSE_WAIT state. TCP is now half-closed: the connection in the client-to-server direction is released. After receiving the server's acknowledgment, the client enters the FIN_WAIT2 state and waits for the server's own connection release segment.

Third wave: If the server also wants to close the connection, it sends a FIN packet with its own sequence number, just as the client did in the first wave. The server is now in the LAST_ACK state.

That is, once the server has no more data to send to the client, it sends a connection release segment (FIN=1, ACK=1, seq=w, ack=u+1), enters the LAST_ACK state, and waits for the client's acknowledgment.

Fourth wave: After receiving the FIN, the client sends an ACK packet, likewise using the server's sequence number + 1 as the ACK value. The client is now in the TIME_WAIT state; it waits for a period of time to make sure the server has received its ACK before entering the CLOSED state. Once the server receives the ACK, it enters the CLOSED state.

That is, after receiving the server's connection release segment, the client sends an acknowledgment segment (ACK=1, seq=u+1, ack=w+1) and enters the TIME_WAIT state. The connection is not released yet; the client enters the CLOSED state only after waiting 2MSL (twice the maximum segment lifetime), as set by the time-wait timer.
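The four segments above, with the same u / v / w sequence numbers used in the text, can be sketched as a toy JavaScript model (illustrative only, not real TCP):

```javascript
// Toy model of the four-wave teardown described above.
function fourWayClose(u, v, w) {
  return [
    // 1st wave: client -> server, FIN (client enters FIN_WAIT1)
    { flags: 'FIN', seq: u },
    // 2nd wave: server -> client, ACK (server enters CLOSE_WAIT; half-closed)
    { flags: 'ACK', seq: v, ack: u + 1 },
    // 3rd wave: server -> client, FIN+ACK (server enters LAST_ACK)
    { flags: 'FIN+ACK', seq: w, ack: u + 1 },
    // 4th wave: client -> server, ACK (client enters TIME_WAIT for 2MSL)
    { flags: 'ACK', seq: u + 1, ack: w + 1 },
  ];
}
```

Note how the second and third waves are separate segments: the server's ACK and its own FIN cannot always be merged, which is exactly why four waves are needed.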

Why are four waves needed?

When establishing a connection, the server can combine its reply into a single SYN+ACK packet after receiving the client's SYN: the ACK part replies and the SYN part synchronizes. When tearing down, however, a server that receives a FIN may not be able to close its socket immediately, so it can only reply first with an ACK, effectively saying, "I received the FIN you sent." Only after the server has finished sending all of its remaining data can it send its own FIN. Since the ACK and FIN cannot be sent together, four waves are needed.

TCP uses four waves because a TCP connection is full-duplex: each side must release its own direction of the connection separately. Once a side has released its direction, it can no longer send data, and the connection is in a half-closed state.

During the last wave, the client waits for a period of time before closing. This prevents the server from failing to close properly if the acknowledgment sent to it is lost or corrupted.

WebSocket

WebSocket is a full-duplex communication technology between browser and server introduced with HTML5, and it is an application layer protocol. It runs over TCP and reuses the HTTP handshake channel. A single handshake between the browser and the server is enough to create a persistent connection with bidirectional data transfer.

WebSocket overcomes the limitations of half-duplex communication. Its biggest feature is that the server can actively push messages to the client, and the client can also actively push messages to the server.

A typical WebSocket usage pattern: the client sends the server an event together with the IDs of all intended receivers. When the server receives the event, it immediately notifies all connected clients, and only the clients whose ID appears in the receiver list process the event.
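The receiver-ID filtering in this pattern can be sketched as follows. The event shape and client objects here are hypothetical; a real server would track live WebSocket connections and call `ws.send()` on each selected client:

```javascript
// Sketch of the receiver-ID broadcast pattern described above.
// `event.receiverIds` lists the intended receivers; `clients` is the set
// of currently connected clients, each with an `id`.
function selectReceivers(event, clients) {
  const targets = new Set(event.receiverIds);
  return clients.filter((client) => targets.has(client.id));
}
```

For example, with connected clients 1, 2, and 3 and an event addressed to [1, 3], only clients 1 and 3 are selected to process the event.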

WebSocket features are as follows:

  • Support two-way communication, more real-time
  • You can send text or you can send binary data.
  • Based on TCP protocol, the implementation of the server side is relatively easy
  • The data format is relatively light, with low performance overhead and high communication efficiency
  • There are no same-origin restrictions, and clients can communicate with any server
  • The protocol identifier is ws (or wss when encrypted), and the server address is a URL, e.g. ws://example.com/socket
  • It is well compatible with the HTTP protocol: the default ports are also 80 and 443, and the handshake phase uses HTTP, so the handshake is hard to block and can pass through various HTTP proxy servers.

WebSocket is used as follows:

In the client:

```javascript
const ws = new WebSocket('ws://localhost:9999');
ws.onopen = function () {
  console.log('Connection open.');
  ws.send('hello');
};
ws.onmessage = function (res) {
  console.log(res);      // prints the MessageEvent object
  console.log(res.data); // prints the received message
};
ws.onclose = function (evt) {
  console.log('Connection closed.');
};
```
