PS: This is something front-end developers really need to know. I only have a general understanding of it myself, so if there are mistakes, please point them out. Thanks~
HTTP
HTTP1.1 version
HTTP1.1 is currently the most widely used version. Before 1.1 there were version 0.9 (1991) and version 1.0 (1996). Version 1.1 is richer and better than its predecessors, but it still has some problems that need to be addressed.
1. Persistent connections. In HTTP1.0, a new TCP connection was established for every HTTP request. HTTP1.1 introduced persistent connections: the TCP connection is not closed by default and can be reused by multiple requests (keep in mind that creating a TCP connection is expensive; a small keep-alive sketch follows the head-of-line notes below).
2. Pipelining. With pipelining, the client can send multiple HTTP requests on the same connection without waiting for the previous response before sending the next. However, the server must still return responses in the order the requests were sent, so if the first request takes the server a long time to process, every response behind it has to wait. This is head-of-line blocking, and it is why most browsers ship with pipelining disabled. Without pipelining, a persistent connection can still only handle one request at a time, and other requests are blocked until the current one finishes. Modern browsers work around this by opening up to six connections per domain and spreading requests across them. HTTP2 largely solves this problem.
Speaking of head-of-line blocking, a few more words:
HTTP1.0: on the same TCP connection, all requests are queued; the next request can only be sent after the response to the previous one has been received. So HTTP1.0's head-of-line blocking happens on the client side.
HTTP1.1: multiple requests can be sent on the same TCP connection, i.e. the next request can go out before the previous response arrives, which eliminates HTTP1.0's client-side head-of-line blocking. However, HTTP1.1 requires the server to send responses in the order the requests were received: the response to the first request received must go out first. The problem is that if the first request takes a long time to process and its response is generated slowly, it blocks the delivery of responses that are already ready, which is still head-of-line blocking. So HTTP1.1's head-of-line blocking happens on the server side.
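Back to point 1 for a moment: here is a minimal sketch of persistent-connection reuse, assuming Node's built-in http module (example.com and the paths are placeholders):

```typescript
// Minimal sketch of HTTP1.1 keep-alive with Node's built-in http module.
// "example.com" and the paths are placeholders.
import http from "node:http";

// keepAlive: true tells the agent to hold the TCP connection open so
// later requests can reuse it instead of paying the setup cost again;
// maxSockets: 1 forces all three requests through one connection.
const agent = new http.Agent({ keepAlive: true, maxSockets: 1 });

for (const path of ["/a", "/b", "/c"]) {
  http.get({ host: "example.com", path, agent }, (res) => {
    console.log(path, res.statusCode);
    res.resume(); // drain the body so the socket can be reused
  });
}
```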
HTTP2 version
1. Header compression
2. Multiplexing
A few concepts to understand first:
In the past, the key to HTTP performance optimization was not higher bandwidth but lower latency. TCP connections "tune" themselves over time: the transfer speed is capped at first and increases as data keeps arriving successfully. This tuning is called TCP slow start.
Frame: the smallest unit of HTTP2 communication; each frame carries the identifier of the stream it belongs to. Parsing binary frames is much easier than parsing HTTP1.x's text format.
Stream: a virtual channel within a connection. Each request can be treated as a stream made up of frames.
On one TCP connection, the two sides can keep sending frames to each other; the stream identifier on each frame marks which stream it belongs to, so the receiver can reassemble all the frames of each stream back into a complete message. By treating each request as a stream, multiple requests become multiple streams: the request and response data is split into frames, and frames from different streams are sent to the other side interleaved. This is multiplexing in HTTP2.
The concept of streams implements multiple request-responses in parallel on a single connection and solves the head-of-line blocking problem.
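A minimal sketch of multiplexing, assuming Node's built-in http2 module (the URL and paths are placeholders): three requests, three streams, one TCP connection.

```typescript
// Minimal sketch of HTTP2 multiplexing with Node's built-in http2 module:
// three requests become three streams sharing ONE TCP connection.
// "https://example.com" and the paths are placeholders.
import http2 from "node:http2";

const session = http2.connect("https://example.com"); // the single connection

let pending = 3;
for (const path of ["/index.html", "/app.js", "/style.css"]) {
  const stream = session.request({ ":path": path }); // one stream per request
  stream.on("response", (headers) => console.log(path, headers[":status"]));
  // Frames from the three streams arrive interleaved; each frame's stream
  // id lets http2 reassemble them into the right response.
  stream.on("data", () => {});
  stream.on("end", () => {
    stream.close();
    if (--pending === 0) session.close(); // done: close the one connection
  });
}
```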
3. Server push
The browser sends a request, and the server actively pushes resources associated with the request to the browser so that the browser doesn’t have to make subsequent requests.
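Here is a sketch of server push with Node's http2 module; the certificate paths and the pushed file are placeholders:

```typescript
// A sketch of HTTP2 server push with Node's http2 module. The key/cert
// paths are placeholders (browsers only speak HTTP2 over TLS).
import http2 from "node:http2";
import fs from "node:fs";

const server = http2.createSecureServer({
  key: fs.readFileSync("server-key.pem"),   // placeholder
  cert: fs.readFileSync("server-cert.pem"), // placeholder
});

server.on("stream", (stream, headers) => {
  if (headers[":path"] === "/") {
    // Push /style.css before the browser asks for it.
    stream.pushStream({ ":path": "/style.css" }, (err, pushStream) => {
      if (err) return;
      pushStream.respond({ ":status": 200, "content-type": "text/css" });
      pushStream.end("body { margin: 0; }");
    });
    stream.respond({ ":status": 200, "content-type": "text/html" });
    stream.end('<link rel="stylesheet" href="/style.css">');
  }
});

server.listen(8443);
```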
In general, HTTP2 requires HTTPS
HTTP2 vs HTTP1.X
1. All communication happens over a single connection; no extra TCP connections need to be opened, which effectively reduces latency
2. Headers are compressed and smaller, reducing overhead
3. Head-of-line blocking is effectively eliminated
4. A binary format is used instead of a text format
5. The server can proactively push responses
TCP
TCP's three-way handshake and four-way wave are a cliché often asked about in front-end interviews. I used to know only a little about it and never understood it well. Of course, the front end doesn't need to go too deep; after all, we're just slicing up mockups. Without further ado, here's the picture:
Three-way handshake
SYN/ACK: flag bits, each either 0 or 1
SYN=1: initiate a connection
ACK=1: acknowledge receipt
seq: the sequence number of the packet (the initial value is chosen at random)
ack: the acknowledgement number; it acknowledges the packets received so far and says which sequence number is expected next
Note: (A) don't confuse the acknowledgement number ack with the flag bit ACK; (B) ack = the other side's seq + 1, and this pairing holds in both directions
General process:
The client initiates a connection with SYN=1 and sends a packet with seq=x (TCP specifies that a packet with SYN=1 must carry a sequence number)
The server replies with ACK=1 to confirm receipt and ack=x+1, meaning "I have received everything up to x and expect your next packet to have seq=x+1"; at the same time it sets SYN=1 to open the connection in the other direction and sends a packet with seq=y
The client replies with ACK=1 and ack=y+1, meaning "I have received everything up to y", and sends a packet with seq=x+1
OK~ Put simply:
A: Can you hear me?
B: I can hear you. Can you hear me?
A: I can hear you, too.
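To pin down the numbers, here is a toy model of the three steps (illustrative only; real initial sequence numbers are random 32-bit values, and x=100 / y=300 are made up):

```typescript
// Toy model of the three-way handshake with concrete (made-up) numbers.
interface Segment {
  SYN: 0 | 1;
  ACK: 0 | 1;
  seq: number;
  ack?: number;
}

const x = 100; // client's random initial sequence number (assumed)
const y = 300; // server's random initial sequence number (assumed)

const handshake: Segment[] = [
  { SYN: 1, ACK: 0, seq: x },                 // 1. client: "can you hear me?"
  { SYN: 1, ACK: 1, seq: y, ack: x + 1 },     // 2. server: "I hear you; can you hear me?"
  { SYN: 0, ACK: 1, seq: x + 1, ack: y + 1 }, // 3. client: "I hear you too"
];

handshake.forEach((s, i) => console.log(`step ${i + 1}:`, s));
```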
The four-way wave
Seeing that the handshake takes three steps, you might ask: why does closing take four? Look at the picture:
The core is actually the second step: when the server ACKs the client's FIN it may still have data left to send, so its own FIN has to come in a separate, later step, which ensures the integrity of the data transfer. The other steps are similar to the three-way handshake.
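Reusing the toy model from the handshake, the four waves look like this (u and v are made-up sequence numbers; step 3 assumes the server sent no further data after step 2):

```typescript
// The same toy model applied to the four-way wave (FIN replaces SYN).
const u = 500; // client's sequence number when it starts closing (assumed)
const v = 700; // server's sequence number (assumed)

const wave = [
  { FIN: 1, ACK: 0, seq: u },                 // 1. client: "I'm done sending"
  { FIN: 0, ACK: 1, seq: v, ack: u + 1 },     // 2. server: "got it" (may still be sending data)
  { FIN: 1, ACK: 1, seq: v, ack: u + 1 },     // 3. server: "now I'm done too"
  { FIN: 0, ACK: 1, seq: u + 1, ack: v + 1 }, // 4. client: "got it" (then waits 2*MSL before closing)
];

wave.forEach((s, i) => console.log(`wave ${i + 1}:`, s));
```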
Common interview question No. 2:
Why can’t we just shake hands twice?
Take client A and server B as an example. Suppose A sends connection request A1 to B, but A1 gets stuck on a slow link, so A never receives B's acknowledgement of it and sends a new request A2 instead. If the long-delayed A1 later reaches B, B returns acknowledgement packet B1 and considers a connection established. But A discarded A1 long ago, so it ignores B1, and B is left holding a "zombie" connection. The third step of the handshake, A's final acknowledgement, is exactly what lets B confirm the connection is really wanted.
HTTPS
HTTP vs HTTPS
- HTTP is transmitted in plaintext, while HTTPS is encrypted with SSL/TLS, making it more secure
- The HTTP port number is 80 and the HTTPS port number is 443
- HTTPS requires applying to a CA for a digital certificate, which usually costs money
The SSL handshake
Quoting Ruan Yifeng:
In the first step, Alice gives the protocol version number, a client-generated random number (Client random), and the encryption methods the client supports.
In the second step, Bob confirms the encryption method both parties will use, presents his digital certificate, and includes a server-generated random number (Server random).
In the third step, Alice confirms that the digital certificate is valid, generates a new random number (Premaster secret), encrypts it with the public key in the digital certificate, and sends it to Bob.
In the fourth step, Bob uses his private key to recover the random number Alice sent (the Premaster secret).
In the fifth step, Alice and Bob use the three random numbers above, according to the agreed encryption method, to generate the "session key", which encrypts the whole of the following conversation.
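A conceptual sketch of the fifth step, not real TLS: both sides feed the same three random numbers into a key-derivation step and arrive at the same session key (real TLS uses a standardized PRF/HKDF; the SHA-256 hash here is only a stand-in):

```typescript
// Conceptual sketch of step five (NOT real TLS; real TLS uses a
// standardized PRF/HKDF, and SHA-256 here is only a stand-in).
import crypto from "node:crypto";

function deriveSessionKey(
  clientRandom: Buffer,   // step 1, sent in the clear
  serverRandom: Buffer,   // step 2, sent in the clear
  premasterSecret: Buffer // step 3, sent encrypted with Bob's public key
): Buffer {
  return crypto
    .createHash("sha256")
    .update(clientRandom)
    .update(serverRandom)
    .update(premasterSecret)
    .digest();
}

// Alice and Bob each run the same derivation on the same three values,
// so both end up holding the same symmetric session key; an eavesdropper
// who never saw the premaster secret cannot reproduce it.
const sessionKey = deriveSessionKey(
  crypto.randomBytes(32),
  crypto.randomBytes(32),
  crypto.randomBytes(48)
);
console.log("session key:", sessionKey.toString("hex"));
```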
Digital certificates are generally used to ensure that the public key belongs to the server and to prevent man-in-the-middle modification.
Symmetric and asymmetric keys and digital certificates
1. Symmetric keys
Encryption and decryption use the same key
The downside of symmetric keys is that the key has to be shared somehow, so a middleman who intercepts it can snoop on and tamper with the traffic
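A minimal symmetric-encryption sketch with Node's built-in crypto module (AES-256-GCM, chosen here for illustration): the same key both encrypts and decrypts.

```typescript
// Minimal symmetric encryption with Node's crypto module: the SAME key
// both encrypts and decrypts (AES-256-GCM, chosen for illustration).
import crypto from "node:crypto";

const key = crypto.randomBytes(32); // the shared secret key
const iv = crypto.randomBytes(12);  // per-message nonce

const cipher = crypto.createCipheriv("aes-256-gcm", key, iv);
const ciphertext = Buffer.concat([cipher.update("hello", "utf8"), cipher.final()]);
const tag = cipher.getAuthTag();

// Anyone who holds the key can decrypt (and could also forge messages),
// which is exactly the interception risk described above.
const decipher = crypto.createDecipheriv("aes-256-gcm", key, iv);
decipher.setAuthTag(tag);
const plaintext = Buffer.concat([decipher.update(ciphertext), decipher.final()]);
console.log(plaintext.toString("utf8")); // "hello"
```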
2. Asymmetric keys
An asymmetric key pair (e.g. RSA) consists of a private key and a public key. The private key is kept secret and used for decryption, while the public key is handed out and used for encryption. Data encrypted with the private key can only be decrypted by the corresponding public key, and data encrypted with the public key can only be decrypted by the corresponding private key.
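A minimal asymmetric sketch, again with Node's crypto module: the public key encrypts, and only the matching private key can decrypt.

```typescript
// Minimal asymmetric (RSA) sketch with Node's crypto module: the public
// key encrypts, only the matching private key can decrypt.
import crypto from "node:crypto";

const { publicKey, privateKey } = crypto.generateKeyPairSync("rsa", {
  modulusLength: 2048,
});

// Anyone may hold the public key and use it to encrypt...
const encrypted = crypto.publicEncrypt(publicKey, Buffer.from("premaster secret"));

// ...but only the private-key holder (the server) can decrypt.
const decrypted = crypto.privateDecrypt(privateKey, encrypted);
console.log(decrypted.toString()); // "premaster secret"
```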
Asymmetric keys have their own drawbacks: the RSA algorithm is slow, and a middleman can still intercept the public key in transit. If the middleman intercepts and swaps the public key, then Big Fat is in trouble when he tries to send Bill a message. So the server goes to a trusted CA for a digital certificate (generated by encrypting the server's public key, i.e. Bill's public key, plus some basic information with the CA's private key). Take the communication between client C and the server as an example:
The server sends the digital certificate to client C. C checks the browser's certificate manager and looks under Trusted Root Certification Authorities for a public key that can unlock the digital certificate. If one exists, C unlocks the certificate, obtains the server's public key, and uses it to encrypt data for the communication.
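Finally, a conceptual sketch of that check using Node's X509Certificate (the PEM paths are placeholders; a real browser does this automatically against its built-in root store):

```typescript
// Conceptual sketch of the client-side certificate check with Node's
// X509Certificate: the CA's public key "unlocks" (verifies) the
// certificate's signature, proving the embedded server key is genuine.
import crypto from "node:crypto";
import fs from "node:fs";

const serverCertPem = fs.readFileSync("server-cert.pem"); // placeholder path
const caCertPem = fs.readFileSync("ca-cert.pem");         // placeholder path

const serverCert = new crypto.X509Certificate(serverCertPem);
const caCert = new crypto.X509Certificate(caCertPem);

// verify() checks the certificate's signature with the CA's public key.
if (serverCert.verify(caCert.publicKey)) {
  // Safe to trust serverCert.publicKey and use it to encrypt data.
  console.log("certificate is signed by the trusted CA");
} else {
  console.log("certificate failed verification: possible man in the middle");
}
```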