preface
They say that if you understand how a browser works, you can solve 80 percent of front-end problems.
It does help to know how a browser works, and knowing network protocols such as TCP/IP and HTTP will help your future career.
Below, I summarize 4 common interview questions about browsers and network communication, to help you review browsers, network communication, page rendering, JavaScript, browser security, and related topics, and gain a fresh understanding of the whole front-end and back-end system.
Question 1: Why does Chrome use 4 processes to open a page?
Topics to master: how a web request flows through the browser, the page rendering process, the JavaScript execution process, and web security fundamentals. To understand the multi-process architecture, let's expand on the question below.
Understanding the multi-process architecture
Processes vs. threads: a common confusion
In computing, a single thread performs one computation after another, while multithreading processes multiple computations at once. Multithreading means a program contains multiple execution streams: several threads can run simultaneously within one program to perform different tasks; in other words, a single program is allowed to create multiple parallel lines of execution.
A thread is one flow of execution in a program. Each thread has its own private registers (stack pointer, program counter, etc.), but the code area is shared, meaning different threads can execute the same functions.
Multithreading has costs: each thread occupies memory, so more threads mean more memory use; threads must be coordinated and managed, so the CPU spends time tracking them; threads accessing shared resources affect one another, so contention for shared resources must be resolved; and too many threads make the program hard to control.
In a single-threaded program, statements execute in sequence: earlier work must finish before later work runs. In a multithreaded program, several relatively independent tasks execute by competing for the CPU; macroscopically this looks concurrent, but on a single core the threads actually execute in time slices, just very short ones.
Each running program is a process containing at least one thread, called the main thread, which is created when the program starts and executes the main function. A program with only the main thread is a single-threaded program; a program with more than one thread is a multithreaded program.
A process is a program in execution. It includes the running program plus the memory and system resources the program uses (a process is composed of one or more threads).
The benefit of multithreading is increased CPU utilization: in a multithreaded program, if one thread has to wait, the CPU can run other threads instead of idling, which greatly improves the program's efficiency.
So a thread cannot exist alone; it is started and managed by a process, and a process is a running instance of a program. Threads belong to a process, and using multiple threads for parallel processing within a process can improve computing efficiency. Threads share the data of their process, and when a process shuts down, the operating system reclaims the memory it occupied.
Chrome's current multi-process architecture includes one main browser process, one GPU process, one network process, multiple renderer processes, and multiple plug-in processes.
So why does opening one page involve 4 processes? Because opening a page requires at least 1 network process, 1 browser process, 1 GPU process, and 1 renderer process.
While the multi-process model improves the stability, smoothness, and security of the browser, it also leads to a higher resource footprint and a more complex architecture. The Chrome team's stated goal is a more cohesive, loosely coupled system that is easy to maintain and extend.
Question 2: How does TCP guarantee that a page file is delivered to the browser in its entirety?
On the network, a file is usually split into many packets for transmission, and packets may be lost or corrupted along the way, so a mechanism is needed to ensure the page file is delivered to the browser completely.
The following three aspects are described:
- How do packets reach the host
- How does the host forward packets to the application
- How is the data delivered to the application completely
When a data packet is sent from host A to host B, the packet carries host B's IP address as the destination and host A's IP address as the source; these, together with fields such as the IP version and time-to-live, make up the IP header.
Transmission Control Protocol (TCP) is a connection-oriented, reliable, byte-stream-based transport-layer communication protocol. In the simplified OSI model of computer networks, it performs the functions of layer 4, the transport layer.
User Datagram Protocol (UDP) is another important transport protocol in the same layer.
In the Internet protocol suite, TCP sits directly above the IP layer and below the application layer. Applications on different hosts often need reliable, pipe-like connections, but the IP layer does not provide such a stream mechanism; it only offers unreliable packet exchange.
To prevent packet loss, TCP assigns a sequence number to each packet, and the sequence numbers ensure that packets delivered to the receiving entity are reassembled in order. The receiving entity sends back an acknowledgement (ACK) for each successfully received packet. If the sending entity does not receive an acknowledgement within a reasonable round-trip time (RTT), the packet is assumed lost and is retransmitted.
- Data at the TCP layer is called a stream
- A grouped unit of data is called a segment
TCP's operation has three phases: connection establishment, data transfer, and connection termination.
Do you know what TCP is? At this point most people would just say it’s a protocol.
The Transmission Control Protocol is one of the core protocols of the Internet protocol suite (TCP/IP), the set of communication protocols used to connect hosts on the Internet and most other computer networks.
A protocol is a mutually agreed format for doing something. In computing, it most commonly refers to a set of rules that enables computers to connect to each other and transmit data: a communication protocol.
TCP is a connection-oriented protocol: it establishes and maintains a virtual connection between hosts until the messages of the applications running on top of it have been exchanged. A packet is the basic unit of data transmission over a TCP/IP network.
TCP operates at the transport layer and is responsible for maintaining reliable end-to-end communication across the network. IP is the network-layer protocol, the layer directly below the transport layer. Other transport-layer protocols include UDP (User Datagram Protocol), RTP (Real-time Transport Protocol), and SCTP (Stream Control Transmission Protocol).
Connection establishment
- TCP uses the three-way handshake process to create a connection
The three-way handshake protocol process:
A. The client sends a SYN packet to the server to actively open the connection. The packet carries a random number A, chosen by the client for this connection request, as its initial sequence number.
B. After receiving the SYN packet, the server adds it to the SYN queue and echoes back a SYN/ACK. The acknowledgement number is A+1, and the SYN/ACK packet itself carries a randomly generated sequence number B.
C. After receiving the SYN/ACK packet, the client sends an ACK packet with sequence number A+1 and acknowledgement number B+1. When the server receives this ACK, the connection request is removed from the SYN queue and placed in the ACCEPT queue.
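The sequence/acknowledgement arithmetic of the three steps above can be sketched in JavaScript. This is a toy illustration with made-up initial sequence numbers (ISNs), not a real network exchange:

```javascript
// Sketch of the three-way handshake sequence/ACK arithmetic.
// clientIsn and serverIsn stand for the random initial sequence
// numbers A and B from the steps above.
function threeWayHandshake(clientIsn, serverIsn) {
  const syn    = { flags: ['SYN'],        seq: clientIsn };                      // step A
  const synAck = { flags: ['SYN', 'ACK'], seq: serverIsn, ack: syn.seq + 1 };    // step B: ack = A+1
  const ack    = { flags: ['ACK'],        seq: syn.seq + 1, ack: synAck.seq + 1 }; // step C: ack = B+1
  return [syn, synAck, ack];
}

const [syn, synAck, ack] = threeWayHandshake(1000, 5000);
console.log(synAck.ack); // 1001: the server acknowledges A+1
console.log(ack.ack);    // 5001: the client acknowledges B+1
```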
Scenario: the server receives a SYN from the client and replies with a SYN-ACK, but the client then goes offline. The server never receives the client's ACK, so the connection sits in an intermediate state, neither succeeded nor failed.
If the server does not receive the ACK within a certain period, it resends the SYN-ACK.
- When receiving a TCP packet, the host identifies the session to which the TCP packet belongs with the IP addresses and port numbers of both ends.
- All sessions are stored in a single table; each entry is called a TCB (Transmission Control Block).
- The TCB structure includes: the source port, destination port, and destination IP used by the connection; the sequence number and acknowledgement number; the peer's window size and our own window size; the TCP state; the TCP input/output queues and the application-layer output queue; TCP retransmission variables; and so on.
- The number of connections on the server side is not fixed; it is limited only by memory.
Data transfer
Each TCP segment carries a pair of numbers: a sequence number and an acknowledgement number.
A TCP sender labels its own byte stream with sequence numbers and acknowledges the peer's byte stream with acknowledgement numbers. Using these, the TCP layer can deliver the bytes of received segments to the application layer in the correct order.
TCP uses sequence numbers to identify the order of the bytes sent from each end, so the receiver can reconstruct the byte sequence without worrying about out-of-order delivery or packet loss.
The receiver sends back an acknowledgement packet that tells the peer the position in the byte stream up to which data has been successfully received.
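The reordering idea can be shown with a toy JavaScript sketch. The `reassemble` helper is hypothetical, and `seq` here is simplified to a byte offset; a real TCP stack also handles gaps, duplicates, and acknowledgement generation:

```javascript
// Sketch: reassemble out-of-order segments into the original byte
// stream using their sequence numbers (treated here as byte offsets).
function reassemble(segments) {
  return segments
    .slice()                       // don't mutate the input array
    .sort((a, b) => a.seq - b.seq) // order by sequence number
    .map(s => s.data)
    .join('');
}

// Segments may arrive out of order on the network...
const received = [
  { seq: 5, data: 'World' },
  { seq: 0, data: 'Hello' },
];
console.log(reassemble(received)); // "HelloWorld"
```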
The figure above shows the process of data transfer
Packet structure
Let’s take a look at the packet structure:
It includes: source port, destination port, sequence number, acknowledgement number, data offset, flags, window, checksum, and urgent pointer.

- Source port (16 bits): identifies the sending port
- Destination port (16 bits): identifies the receiving port
- Sequence number (`seq`, 32 bits)
- Acknowledgement number (`ack`, 32 bits): the starting sequence number of the data expected next, i.e. the number of bytes already received plus 1
- Data offset (4 bits): the offset of the start of the data section, counted in 4-byte units (i.e. the header length)
- Reserved: must be set to 0
- `ACK` flag: 1 indicates that the acknowledgement number field is valid
- `SYN` flag: 1 indicates a connection request or connection-accept request; used to create a connection and synchronize sequence numbers
- `FIN` flag: 1 indicates the sender has no more data to transmit and requests to release the connection
- `RST` flag: 1 indicates a serious error occurred and the TCP connection may need to be re-created; also used to reject invalid segments and connection requests
- Urgent pointer (16 bits): the sequence number of the last byte of urgent data in this segment
- Window (`WIN`, 16 bits): the number of bytes the sender of this segment can receive, starting from the acknowledgement number, i.e. the size of the receive window; used for flow control
- Checksum (16 bits): computed over the whole TCP segment, including the TCP header and TCP data, in 16-bit words; a mandatory field
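To make the field list concrete, here is a minimal sketch that decodes the fixed 20-byte TCP header from a raw buffer (Node.js `Buffer`, network byte order). The sample header values are made up for illustration:

```javascript
// Sketch: decode the fixed 20-byte TCP header fields listed above.
function parseTcpHeader(buf) {
  const flags = buf.readUInt8(13); // flag bits: FIN=0x01 SYN=0x02 RST=0x04 ACK=0x10
  return {
    srcPort: buf.readUInt16BE(0),
    dstPort: buf.readUInt16BE(2),
    seq: buf.readUInt32BE(4),
    ack: buf.readUInt32BE(8),
    dataOffset: buf.readUInt8(12) >> 4, // header length in 4-byte units
    SYN: (flags & 0x02) !== 0,
    ACK: (flags & 0x10) !== 0,
    FIN: (flags & 0x01) !== 0,
    RST: (flags & 0x04) !== 0,
    window: buf.readUInt16BE(14),
    checksum: buf.readUInt16BE(16),
    urgentPointer: buf.readUInt16BE(18),
  };
}

// Build a sample header: port 80 -> 8080, SYN set, window 65535.
const h = Buffer.alloc(20);
h.writeUInt16BE(80, 0);     // source port
h.writeUInt16BE(8080, 2);   // destination port
h.writeUInt32BE(1000, 4);   // sequence number
h.writeUInt8(5 << 4, 12);   // data offset = 5 units (20-byte header)
h.writeUInt8(0x02, 13);     // SYN flag
h.writeUInt16BE(65535, 14); // window
console.log(parseTcpHeader(h).SYN, parseTcpHeader(h).srcPort); // true 80
```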
Remember that IP delivers the packet to the destination host. So how does the packet get passed on to the right application?
UDP delivers data packets to applications.
UDP is a protocol built on top of IP that can communicate with applications. The User Datagram Protocol (UDP) determines which program a data packet goes to, while IP only delivers the packet to the other computer.
Let’s look at the position of UDP and TCP in the TCP/IP architecture:
UDP is connectionless. It supports one-to-one, one-to-many, many-to-one, and many-to-many communication; it packages whatever the application layer hands down directly; delivery is best-effort, i.e. unreliable; it uses no flow control or congestion control; and its header overhead is small, only 8 bytes.
TCP is connection-oriented. Each TCP connection has exactly two endpoints, so communication is one-to-one; it is byte-stream oriented; transmission is reliable, using flow control and congestion control; and its header is at least 20 bytes and at most 60 bytes.
Let's compare the header overhead of UDP and TCP:
The most important part of UDP is the port number: UDP dispatches data packets to the correct program by port number. UDP cannot guarantee reliable delivery, but it is fast.
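The "only 8 bytes" claim is easy to verify: the entire fixed UDP header is source port, destination port, length, and checksum, 16 bits each. A minimal sketch (the port values are made-up examples):

```javascript
// Sketch: build the fixed 8-byte UDP header --
// source port, destination port, length, checksum (16 bits each).
function buildUdpHeader(srcPort, dstPort, payloadLength) {
  const buf = Buffer.alloc(8);
  buf.writeUInt16BE(srcPort, 0);
  buf.writeUInt16BE(dstPort, 2);
  buf.writeUInt16BE(8 + payloadLength, 4); // length includes the header itself
  buf.writeUInt16BE(0, 6);                 // checksum (optional in IPv4)
  return buf;
}

const header = buildUdpHeader(53124, 53, 32); // e.g. toward a DNS server
console.log(header.length);          // 8  -- tiny overhead
console.log(header.readUInt16BE(2)); // 53 -- the port routes data to the app
```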
The key question remains: how is the data delivered to the application in its entirety?
TCP is about delivering data intact to the application.
TCP is a connection-oriented, reliable, byte-stream-based transport-layer protocol: it provides retransmission, and it introduces packet ordering (the TCP header carries a sequence number used to put packets back in order).
TCP establishes a connection with a three-way handshake and closes it with four waves.
The three-way handshake, as a conversation:
Girlfriend to boyfriend: "Are you there?" Boyfriend to girlfriend: "I'm here!" Girlfriend to boyfriend: "Got it!"
Now the boyfriend knows too.
The four waves, as a conversation:
Girlfriend to boyfriend: "Let's break up!" Boyfriend to girlfriend: "Huh?" Boyfriend to girlfriend: "Are you serious?" Girlfriend to boyfriend: "Serious!"
At that point the girlfriend deletes the boyfriend on WeChat.
With these three handshakes and four waves, I'm sure you get the idea, haha!
Question 3: What is the HTTP request process, and why do many sites open faster the second time?
HTTP, the HyperText Transfer Protocol, runs on top of TCP connections. It is an application-layer protocol for distributed, collaborative, hypermedia information systems, and it is the basis of data communication on the World Wide Web.
Someone said: To learn the browser well, you need to know HTTP well.
The browser uses HTTP as the application-layer protocol to encapsulate the requested text, and TCP/IP as the transport-layer protocol to send it onto the network (the HTTP content is carried in TCP's data transmission phase).
- The system that maps between domain names and IP addresses is called the **Domain Name System**, or **DNS** for short.
The Domain Name System (DNS) is a service on the Internet: a distributed database that maps domain names and IP addresses to each other, making it easier for people to access the Internet.
A domain name such as dadaqianduan.cn (the URL address) maps to an IP address such as xx.233.xxx.12 (used for the actual access).
In the first step, the browser asks DNS to return the IP address corresponding to the domain name. The browser also provides a DNS data cache: if a domain name has already been resolved, the browser caches the result and uses it directly on the next query, saving a network request. After obtaining the IP address, the browser needs the port number; if the URL does not specify one, HTTP defaults to port 80.
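The browser's DNS caching idea can be sketched as simple memoization. This is an illustrative toy, not the browser's actual implementation: `makeCachingResolver` is a hypothetical helper, and the lookup table stands in for a real DNS query:

```javascript
// Sketch: cache resolved domains so repeat visits skip the lookup.
function makeCachingResolver(lookupFn) {
  const cache = new Map();
  let networkLookups = 0;
  return {
    resolve(domain) {
      if (!cache.has(domain)) {
        networkLookups += 1;                // simulated network DNS query
        cache.set(domain, lookupFn(domain));
      }
      return cache.get(domain);             // cache hit on repeat queries
    },
    get networkLookups() { return networkLookups; },
  };
}

// A fake lookup table in place of a real DNS server:
const resolver = makeCachingResolver(d => ({ 'example.com': '93.184.216.34' }[d]));
resolver.resolve('example.com');
resolver.resolve('example.com');      // served from the cache
console.log(resolver.networkLookups); // 1
```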
At this point we have the IP address and port number. Let me add a description of the HTTP protocol here for better understanding:
HTTP is a request-response standard between a client and a server, usually carried over TCP. Using a web browser, crawler, or other tool, the client sends an HTTP request to a specified port on the server; the default port is 80.
The responding server, called the origin server, stores resources such as HTML files and images. (The client is called the user agent.) Between the user agent and the origin server there may be multiple intermediaries, such as proxy servers, gateways, and tunnels.
The HTTP server listens on the port for the client's request. Upon receiving a request, the server returns a status line such as "HTTP/1.1 200 OK" along with the response content: the requested file, an error message, or other information.
1. Build the request (construct the request line);
2. Look up the cache (the browser cache keeps a local copy of a resource for direct use on the next request);
3. Prepare the IP address and port;
4. Wait in the TCP queue;
5. Establish the TCP connection;
6. Send the HTTP request.
The server then processes the request, returns a response, and the connection is closed.
In Chrome, at most six TCP connections can be established to the same domain at once. If 10 requests are made to the same domain, four of them will be queued.
If there are no more than six requests, TCP connections are established for all of them directly.
Sending an HTTP request
So how does the browser send requests to the server?
Here is a POST request capture diagram:
The browser sends a request to the server and receives the response:
Description: when the user enters a URL in the browser, the browser's core code splits and parses the URL, then sends the domain name to a DNS server. The DNS server looks up the corresponding IP address for the domain and returns it to the browser.
Once the browser has the IP address, it knows where to send the request. The request travels through the LAN, switches, routers, and the backbone to reach the server.
For those of you familiar with HTTP, let's look at the HTTP request data format (see the POST request capture above):
HTTP request data format:
The browser first sends a request line to the server (request method, request URI, HTTP protocol version), which tells the server what resource the browser needs; the most common method is GET. It then sends request headers, which tell the server some basic information: the operating system, the browser engine, the domain being requested, the browser's cookies, and so on. Finally there may be a request body; with POST, for example, the prepared data to be sent to the server is carried in the request body.
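The text that actually goes onto the TCP connection can be sketched in a few lines. This is a simplified illustration (hypothetical `buildHttpRequest` helper; real browsers add many more headers):

```javascript
// Sketch: serialize a request line, request headers, a blank line,
// and an optional request body, as described above.
function buildHttpRequest({ method, path, host, headers = {}, body = '' }) {
  const lines = [`${method} ${path} HTTP/1.1`, `Host: ${host}`]; // request line + Host header
  for (const [name, value] of Object.entries(headers)) {
    lines.push(`${name}: ${value}`);
  }
  if (body) lines.push(`Content-Length: ${Buffer.byteLength(body)}`);
  return lines.join('\r\n') + '\r\n\r\n' + body; // blank line separates headers from body
}

const req = buildHttpRequest({
  method: 'POST',
  path: '/login',
  host: 'example.com',
  headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
  body: 'user=dada',
});
console.log(req.split('\r\n')[0]); // "POST /login HTTP/1.1"
```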
The server processes the HTTP request and then:
- returns a response;
- closes the connection;
- or redirects.
Using `curl -i`, you can view the returned data: the response line (protocol version and status code), the response headers, and the response body.
Typically, the server returns the data to the client and closes the TCP connection. However, if the header `Connection: keep-alive` is present, the TCP connection stays open and can be reused for the next request, saving the time of establishing a new connection.
Normally, once the response is returned the connection is closed and that's it. But sometimes you open a URL in the browser and end up at a different page address, because a redirect occurred.
As shown in the figure: `curl -I` fetches only the response line and response headers.
- The Location field contains the redirect target address.
Difference between status code 301 and 302
301 Moved Permanently: the requested resource has been permanently moved to a new location, and all future references to this resource should use one of the URIs returned in this response. If possible, clients with link-editing capability should automatically change the requested address to the one returned by the server. Unless otherwise specified, the response is also cacheable.
302 Found: the requested resource now temporarily responds from a different URI. Since such a redirect is temporary, the client should continue to send future requests to the original address. The response is cacheable only if indicated by Cache-Control or Expires.
The literal difference: 301 is a permanent redirect while 302 is a temporary redirect
With a 302 temporary redirect, a search engine fetches the new content but keeps the old address, because the 302 tells it the new URL is only temporary.
With a 301 permanent redirect, the search engine fetches the new content and also replaces the old URL with the redirect target.
Next, let's sort out the HTTP version history, which you'll want to know as we go along.
HTTP/0.9:
Obsolete. Only the GET method is supported, and only HTML resources can be requested. No version number appears in the communication, and request headers are not supported.
HTTP/1.0:
The first version of HTTP to specify a version number in the communication. It added the POST and HEAD request methods; it is no longer limited to the HTML of version 0.9 and can support multiple data formats via Content-Type; and it added status codes, multi-character-set support, multi-part types, authorization, caching, and content encoding.
In version 1.0, each TCP connection can carry only one request: once the server has responded, the connection is closed, and the next request must establish a new TCP connection.
HTTP/1.1:
Persistent connections are used by default (TCP connections are not closed after a response and can be reused by multiple requests without declaring `Connection: keep-alive`), which also works well with proxy servers.
One TCP connection can carry multiple HTTP requests.
A pipelining mechanism was added, allowing multiple requests to be sent on the same TCP connection without waiting for each response, increasing concurrency and further improving HTTP efficiency.
Version 1.1 also makes it possible to omit the Content-Length field and use chunked transfer encoding instead: when a request or response carries the `Transfer-Encoding: chunked` header, the body consists of an undetermined number of chunks.
Added the PUT, PATCH, OPTIONS, and DELETE request methods.
- Chunked transfer encoding: a data transfer mechanism in HTTP/1.1 that allows data sent from a web server to a client application to be divided into multiple chunks.
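The chunked format is simple enough to decode by hand: each chunk is a hex length line, then that many bytes of data, and a zero-length chunk ends the body. A minimal sketch (hypothetical `decodeChunked` helper, assuming well-formed input; a real parser must also handle trailers and incremental arrival):

```javascript
// Sketch: decode a chunked body of the form
//   <hex length>\r\n<data>\r\n ... 0\r\n\r\n
function decodeChunked(raw) {
  let out = '';
  let pos = 0;
  while (true) {
    const lineEnd = raw.indexOf('\r\n', pos);
    const size = parseInt(raw.slice(pos, lineEnd), 16); // chunk size in hex
    if (size === 0) break;                              // "0\r\n\r\n" terminates the body
    out += raw.slice(lineEnd + 2, lineEnd + 2 + size);
    pos = lineEnd + 2 + size + 2;                       // skip data + trailing CRLF
  }
  return out;
}

const body = '5\r\nHello\r\n6\r\n World\r\n0\r\n\r\n';
console.log(decodeChunked(body)); // "Hello World"
```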
Within a single TCP connection, however, all data communication still happens in sequence: a slow response makes later requests queue up behind it, causing "head-of-line blocking".
HTTP/2:
Officially published as an Internet standard in May 2015. It adds a duplex mode: the client can send multiple requests at the same time, and the server can process multiple requests at the same time, addressing head-of-line blocking.
Multiplexing handles multiple requests concurrently over the same connection, and the number of concurrent requests is an order of magnitude larger than in HTTP/1.1.
It also adds server push: the server can proactively send data to the client without being asked.
Differences between HTTP/1.1 and HTTP/1.0:
- Cache handling
- Bandwidth optimization and network connection usage
- Management of error notifications
- The sending of messages across the network
- Maintenance of Internet addresses
- Security and integrity
Finally: why does a site open faster the second time?
Because some time-consuming data is cached during the first page load, chiefly the DNS cache and the page resource cache.
- Browser cache
When the server first returns an HTTP response to the browser, the browser uses the Cache-Control field in the response headers to decide whether to cache the resource, e.g. `Cache-Control: max-age=2000`. If the cache has not expired, the browser answers a repeat request for the resource directly from the cache; if it has expired, the browser makes a network request again.
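The max-age check described above amounts to comparing the cached response's age against the header value. A minimal sketch (hypothetical `isFresh` helper; real browsers also honor `Expires`, `no-store`, revalidation, and more):

```javascript
// Sketch: a cached response is "fresh" while its age (seconds since
// it was stored) is below the Cache-Control max-age value.
function isFresh(cacheControl, storedAtMs, nowMs) {
  const match = /max-age=(\d+)/.exec(cacheControl);
  if (!match) return false;                 // no max-age: treat as stale here
  const maxAgeSeconds = Number(match[1]);
  const ageSeconds = (nowMs - storedAtMs) / 1000;
  return ageSeconds < maxAgeSeconds;
}

const storedAt = Date.parse('2023-01-01T00:00:00Z');
console.log(isFresh('max-age=2000', storedAt, storedAt + 1000 * 1000)); // true: age 1000s < 2000s
console.log(isFresh('max-age=2000', storedAt, storedAt + 3000 * 1000)); // false: age 3000s, expired
```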
Question 4: What happens from entering a URL to the page being displayed?
To put it simply:
- The browser main process commits the URL to the network process
- The network process requests the server, receives the response headers and body, and determines whether a redirect is needed
- The network process submits page-type responses to the renderer process
- The renderer process finishes rendering and loading the page
Step by step, briefly:
- First, domain name resolution
- Establishing a TCP connection
- Sending an HTTP request
- The server processes the HTTP request
- Closing the TCP connection
- The browser parses the resources
- The browser renders the page
For the full steps from entering a URL in the address bar to the page being displayed, see github.com/webVueBlog/…
At the end of this article, I leave you with the interview question above: "What happens between entering a URL and the page being presented?" If you were interviewed, how would you answer it?
And if you were the interviewer, what would you test for?
Further reading
"How Browsers Work: Principles and Practice" (time.geekbang.org/column/intr…)
conclusion
Thanks for reading this article. If you found it helpful, please share it with more friends.
If you liked this article, follow along for more content.