Alibaba internship offer experience – Treasure Trove (Part 1)
This series collects useful interview points! ✨
- Alibaba internship offer experience – Preparation | Creator Camp II
- Alibaba internship offer experience – Second interview round | Creator Camp
- Alibaba internship offer experience – Treasure Trove (Part 1)
- Alibaba internship offer experience – Treasure Trove (Part 2)
The browser
Processes
States
- Running: the process is actually using the CPU at this moment. - Ready: runnable, but temporarily paused because another process is running. - Blocked: the process cannot run until some external event occurs.
Inter-process communication
-
Anonymous pipe
As the name suggests, an anonymous pipe has no identifier or name: it is a special file that exists only in memory, not in the file system. The vertical bar `|` in shell commands is an anonymous pipe. The data it carries is an unformatted byte stream of limited size, and communication is one-way: data flows in only one direction, so two-way communication requires creating two pipes. Anonymous pipes can only be used between related processes (parent and child); their lifetime begins when the process is created and ends when the process terminates.
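A minimal sketch of parent-child communication through an anonymous pipe, using Python's `os` module (POSIX-only: it relies on `fork`, and the pipe is inherited by the child exactly as described above):

```python
import os

def pipe_demo(msg: bytes) -> bytes:
    # os.pipe() returns (read_fd, write_fd): a one-way, in-memory channel
    r, w = os.pipe()
    pid = os.fork()          # the child inherits both file descriptors
    if pid == 0:             # child: writes the message, then exits
        os.close(r)
        os.write(w, msg)
        os.close(w)
        os._exit(0)
    os.close(w)              # parent: reads what the child wrote
    data = os.read(r, 1024)
    os.close(r)
    os.waitpid(pid, 0)
    return data
```

When the child process exits, the pipe's write end closes and the parent's read returns, mirroring the "lifetime tied to the process" point above.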
-
Named pipe
Named pipes remove the restriction that only related processes can communicate: a device file of type `p` is created on the file system, and unrelated processes can communicate through it. For both anonymous and named pipes, data written by one process is buffered in the kernel, and the reading process naturally obtains it from the kernel. The data follows first-in, first-out order, and file-positioning operations such as `lseek` are not supported.
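A sketch of a named pipe (FIFO), again POSIX-only: `os.mkfifo` creates the type-`p` file on disk, and here a writer thread stands in for the unrelated writing process:

```python
import os
import tempfile
import threading

def fifo_demo(msg: bytes) -> bytes:
    path = os.path.join(tempfile.mkdtemp(), "demo.fifo")
    os.mkfifo(path)                  # creates the type-p device file

    def writer():
        with open(path, "wb") as f:  # open blocks until a reader appears
            f.write(msg)

    t = threading.Thread(target=writer)
    t.start()
    with open(path, "rb") as f:      # reads until the writer closes (EOF)
        data = f.read()
    t.join()
    os.remove(path)
    return data
```

Note that `read` returns everything in FIFO order and there is no way to seek, consistent with the limits described above.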
-
Message queue
Message queues overcome the pipe's limitation of being an unformatted byte stream: a message queue is a "list of messages" stored in the kernel, and each message body can be a user-defined data type. When sending, data is split into independent message bodies; when receiving, the receiver must of course use a data type consistent with what the sender sent, which guarantees that the data read is correct. Message-queue communication is not the most immediate, since every write and read involves copying data between user space and kernel space.
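Python's standard library has no binding for System V message queues, but `multiprocessing.Queue` is an analogous kernel-mediated structure and makes the "independent typed message body" idea concrete (a sketch, not the POSIX API):

```python
from multiprocessing import Process, Queue

def producer(q):
    # each message is an independent, structured body, not a raw byte stream
    q.put({"type": 1, "body": "hello"})

def mq_demo():
    q = Queue()
    p = Process(target=producer, args=(q,))
    p.start()
    msg = q.get()   # data is copied across user/kernel space on write and read
    p.join()
    return msg
```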
-
Shared memory
Shared memory eliminates the user-kernel copying overhead of message queues: a shared region of memory is allocated that every attached process can access directly, as conveniently as its own address space, with no kernel transition or system call per access. This makes it the fastest form of inter-process communication. The convenience and efficiency bring a new problem, however: multiple processes competing for the same shared resource can corrupt the data.
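A sketch using `multiprocessing.Value`, which wraps a shared-memory integer: each process updates it in place, and the bundled lock shows exactly the race problem (and its fix) described above:

```python
from multiprocessing import Process, Value

def increment(counter):
    # direct access to shared memory: no message copy through the kernel
    with counter.get_lock():   # without this, concurrent updates could race
        counter.value += 1

def shm_demo(n: int) -> int:
    counter = Value("i", 0)    # an int living in a shared-memory segment
    procs = [Process(target=increment, args=(counter,)) for _ in range(n)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return counter.value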
-
Semaphore
Semaphores, then, are needed to protect the shared resource by ensuring that only one process can access it at any one time, i.e., mutually exclusive access. Semaphores can achieve not only mutual exclusion but also synchronization between processes. A semaphore is actually a counter representing the number of available resources, and its value is manipulated through two atomic operations: the P operation and the V operation.
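A sketch of P/V with `threading.Semaphore` initialized to 1, which is exactly mutual exclusion (a binary semaphore); the same idea applies between processes with `multiprocessing.Semaphore`:

```python
import threading

def semaphore_demo(n_threads: int, n_iters: int) -> int:
    counter = 0
    sem = threading.Semaphore(1)   # count 1 => at most one holder at a time

    def worker():
        nonlocal counter
        for _ in range(n_iters):
            sem.acquire()          # P: decrement; blocks when the count is 0
            counter += 1           # critical section on the shared resource
            sem.release()          # V: increment; wakes one waiter

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter
```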
-
Signal
A similarly named but entirely different mechanism is the signal. Signals are the asynchronous mechanism among the inter-process communication mechanisms: they allow direct interaction between application processes and the kernel, and the kernel can use signals to notify a user-space process of system events. Signal sources include hardware (e.g., Ctrl+C on the keyboard) and software (e.g., the `kill` command). When a signal arrives, a process can respond in three ways: 1. perform the default action; 2. catch the signal with a handler; 3. ignore the signal. Two signals, SIGKILL and SIGSTOP, can be neither caught nor ignored by the application process, so that we can always terminate or stop a process.
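A sketch of response 2 ("capture the signal") on POSIX: install a handler for SIGUSR1, then deliver the signal from software, as the `kill` command would:

```python
import os
import signal

received = []

def handler(signum, frame):
    # response 2: catch the signal with a user-defined handler
    received.append(signum)

# SIGKILL and SIGSTOP cannot be caught or ignored; SIGUSR1 can
signal.signal(signal.SIGUSR1, handler)

# software signal source: send the signal to this very process
os.kill(os.getpid(), signal.SIGUSR1)
```

Trying `signal.signal(signal.SIGKILL, handler)` instead would raise an `OSError`, which is the "cannot be caught" rule in action.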
-
Socket communication
All of the mechanisms above work within a single host. If processes on different hosts need to communicate, sockets are required. Sockets are used not only between processes on different hosts but also between processes on the same machine. Depending on the socket type, there are three common communication modes: TCP, UDP, and local inter-process communication.
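A minimal TCP echo over the loopback interface with Python's `socket` module; the same code works across hosts by changing the address (port 0 asks the OS for any free port):

```python
import socket
import threading

def echo_once(msg: bytes) -> bytes:
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # TCP socket
    srv.bind(("127.0.0.1", 0))       # port 0: let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def server():
        conn, _ = srv.accept()
        conn.sendall(conn.recv(1024))  # echo whatever the client sent
        conn.close()

    t = threading.Thread(target=server)
    t.start()
    cli = socket.create_connection(("127.0.0.1", port))
    cli.sendall(msg)
    data = cli.recv(1024)
    cli.close()
    t.join()
    srv.close()
    return data
```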
Threads
Why use threads
1. Many applications have multiple activities going on at once, some of which block from time to time. Decomposing such an application into multiple sequential threads that run quasi-concurrently makes the programming model simpler. 2. Threads are lighter-weight than processes, so they are easier (i.e., faster) to create and destroy; on many systems creating a thread is 10 to 100 times faster than creating a process. 3. If every thread is CPU-bound there is no performance gain, but when there is substantial computation mixed with substantial I/O, multiple threads let these activities overlap, speeding up the application.
Communication
Threads within the same process share the process's resources, so inter-thread communication only requires shared variables, such as globals. The focus is therefore not on how threads communicate but on how multiple threads contend for shared resources; semaphores can also provide mutual exclusion and synchronization between threads.
juejin.cn/post/686993…
-
Mutex
Only one thread can access a shared resource at any given time.
-
Synchronization
Guarantees ordering, e.g., that thread A executes before thread B.
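A sketch of the A-before-B guarantee using `threading.Event` as the synchronization primitive (a semaphore initialized to 0 would work the same way):

```python
import threading

def ordered_run():
    order = []
    a_done = threading.Event()

    def thread_a():
        order.append("A")
        a_done.set()           # signal: A has finished

    def thread_b():
        a_done.wait()          # block until A signals
        order.append("B")

    tb = threading.Thread(target=thread_b)
    ta = threading.Thread(target=thread_a)
    tb.start()                 # start B first to show that it really waits
    ta.start()
    ta.join()
    tb.join()
    return order
```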
Event engine
Rendering engine
Event loop
JS engine
- The difference between async and defer
Browser kernel
Overview
It is mainly divided into two parts: rendering engine and JS engine.
- Rendering engine: its job is to render, i.e., display the requested content in the browser window. By default a rendering engine can display HTML, XML documents, and images; with plug-ins (browser extensions) it can display other types of data, e.g., a PDF reader plug-in for PDF files. - JS engine: parses and executes JavaScript to provide a page's dynamic behavior.
At the beginning, rendering engines and JS engines were not clearly distinguished, but as JS engines became more and more independent, "kernel" came to refer only to the rendering engine.
Common kernels and comparisons
- Trident: the kernel used by Internet Explorer. Because IE held a large market share early on, this kernel was widespread and many web pages were written to its standard, yet it never supported real web standards well. With IE's dominance, Microsoft left Trident un-updated for a long time, so it fell out of step with W3C standards; numerous bugs and security problems went unfixed, and after experts and scholars publicized the view that IE was insecure, many users began switching to other browsers. - Gecko: the kernel used by Firefox and Flock. Its strengths are power and rich features: it supports many complex web effects and browser extensions, at the cost of consuming considerable resources such as memory. - Presto: the kernel Opera formerly used, regarded as the fastest kernel for browsing thanks to its design advantages; it can process scripting languages such as JavaScript up to three times faster than other kernels. Its drawback is that some web-page compatibility was sacrificed for speed. - Webkit: the kernel used by Safari. Its page rendering is not as fast as Presto's but faster than Gecko's and Trident's. Its weakness is low fault tolerance for web code, i.e., poor compatibility: some non-standard pages cannot be displayed correctly. WebKit descends from the KDE project's KHTML engine and can be considered an open-source branch of KHTML. - Blink: in a post on the Chromium Blog, Google announced that it would fork from Apple's open-source WebKit and build the Blink rendering engine into Chrome via the Chromium project.
In fact, Blink is a branch of WebKit, just as WebKit was a branch of KHTML. Blink is now developed jointly by Google and Opera Software; as mentioned above, Opera abandoned its Presto kernel and joined Google in developing Blink.
requestAnimationFrame
Differences from setTimeout
Redraws the page once per frame
Browser architecture
* User interface * Main process * Kernel * Rendering engine * JS engine * Execution stack * Event-trigger thread * Message queue * Microtasks * Macrotasks * Async network thread * Timer thread
Cross-origin (cross-domain)
Why does it exist
-
The browser is an open platform, so security restrictions are needed
-
The same-origin policy
- Ajax same-origin policy - DOM same-origin policy
The solution
-
JSONP
-
CORS
- Server configuration: Access-Control-Allow-(Origin/Methods/Headers/Credentials/Max-Age) - Simple requests - Non-simple requests - Preflight request
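A sketch of the server-side configuration above using Python's stdlib `http.server` (the allowed origin `https://example.com` is a placeholder; a real service would set its own):

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class CORSHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        # the header that tells the browser which origin may read the response
        self.send_header("Access-Control-Allow-Origin", "https://example.com")
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # silence per-request logging
        pass

def fetch_cors_header() -> str:
    srv = HTTPServer(("127.0.0.1", 0), CORSHandler)
    t = threading.Thread(target=srv.handle_request)  # serve exactly one request
    t.start()
    port = srv.server_address[1]
    resp = urllib.request.urlopen(f"http://127.0.0.1:{port}/")
    t.join()
    srv.server_close()
    return resp.headers["Access-Control-Allow-Origin"]
```

For non-simple requests the browser additionally sends an OPTIONS preflight first, and the server would answer it with the Allow-Methods/Allow-Headers variants of the same header family.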
-
iframe
Caching
Strong/negotiated cache
- Common values - Pros and cons - ETag composition
Storage technology
cookie
localStorage
sessionStorage
IndexedDB
Page rendering
From URL entry to rendered page
Problems
-
FOUC (flash of unstyled content)
This refers to style flicker: some browsers' rendering mechanisms (e.g., Firefox) render the HTML before the CSS has loaded, so unstyled content is displayed first and the styles then suddenly appear. The main causes are CSS that takes too long to load or stylesheets placed at the bottom of the document.
-
hang
Repaint and reflow
-
Reducing reflow
(1) Use transform instead of top. (2) Don't read a node's attribute values repeatedly inside a loop; cache them in a local variable. (3) Don't use table layout: even a small change may cause the entire table to be re-laid out. (4) Batch DOM manipulation in memory, e.g., with a DocumentFragment object. (5) Don't modify DOM styles one property at a time; define a CSS class in advance and change the element's className instead.
network
– Lin da Ren’s cabin
The 7-layer OSI model / 5-layer model
Ajax
handwritten
Status code
Uniform resource
URI
URI: Uniform Resource Identifier
URL
URL: Uniform Resource Locator
URN
URN: Uniform Resource Name
The difference between Post and Get
Post and Get are two methods of HTTP requests.
(1) From the perspective of application scenarios, GET is an idempotent request, generally used where server resources are not affected, such as requesting a web page; POST is not idempotent and is generally used where server resources are affected, such as registering a user. (2) Because the scenarios differ, browsers generally cache GET requests but rarely cache POST requests. (3) In terms of message format, the entity body of a GET request is usually empty, while the entity body of a POST request generally carries the data sent to the server. (4) A GET request can also pass data to the server in the URL, which is less secure than POST because the requested URL is retained in the browser history. Browsers also limit URL length, which limits how much data a GET request can send; this limit comes from the browser, not from the RFC. POST's parameter passing also supports more data types.
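Point (3) and (4) can be made concrete with `urllib.parse`: the same parameters end up in the URL for GET but in the entity body for POST (the endpoint `https://example.com/api` and the parameters are placeholders):

```python
from urllib.parse import urlencode

params = {"user": "alice", "age": "30"}

# GET: parameters travel in the URL, so they show up in history and logs
get_url = "https://example.com/api?" + urlencode(params)

# POST: the same parameters go into the entity body of the request instead
post_body = urlencode(params).encode()
```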
What happens after entering a URL and pressing Enter
(1) The URL is parsed first to determine the transport protocol and the path of the requested resource. If the protocol or host name in the entered URL is invalid, the address-bar content is handed to the search engine; otherwise the browser checks the URL for illegal characters and, if any exist, escapes them before continuing. (2) The browser checks whether the requested resource is in the cache; if it is cached and not expired, it is used directly, otherwise a new request is issued. (3) Next, the IP address of the domain name in the URL must be obtained. The browser first checks whether the IP is in the local DNS cache and uses it if present; otherwise it queries the local DNS server. - The local DNS server also checks its cache. If there is no cache, it sends a request to a root DNS server to obtain the address of the responsible top-level domain (TLD) server. - It then queries the TLD server and obtains the address of the responsible authoritative server. - Finally it queries the authoritative DNS server; having obtained the domain's IP address, the local DNS server returns it to the user. The request the user sends to the local DNS server is recursive; the requests the local DNS server sends up the DNS hierarchy are iterative. (4) Once the browser has the IP address, transmitting data also requires the MAC address of the next hop: the application layer hands data to the transport layer, where TCP adds the source and destination port numbers, then passes it down to the network layer. - The network layer uses the local IP as the source address and the obtained IP as the destination address, then hands the packet to the data link layer. (5) The data link layer needs the MAC addresses of both communicating sides. Our host's MAC address is the source; the destination MAC depends on whether the destination host is on the same subnet. - If it is on the same subnet, the ARP protocol is used to obtain the destination host's MAC address. - If not, the request must be forwarded through the gateway; ARP is likewise used to obtain the gateway's MAC address, which then serves as the destination MAC. (6) If HTTPS is used, a TLS handshake (four messages) takes place before communication. (7) When the page request reaches the server, the server returns an HTML file as the response; the browser receives it, begins parsing the HTML, and starts the rendering process. (8) The browser builds a DOM tree from the HTML file and a CSSOM tree from the parsed CSS. When it encounters a script tag, it checks for the defer or async attribute, since otherwise loading and executing the script would block page rendering. Once the DOM and CSSOM trees are built, the render tree is constructed from them, layout is computed from the render tree, and finally the page is painted through the browser's UI backend. At this point the whole page appears. (9) The last step is closing the TCP connection with the four-way wave.
CDN
A CDN (content delivery network) caches a site's resources on many of its own servers located in different regions and on different carriers, so users can fetch content from the nearest node. In other words, a user's request is not sent directly to the origin site but to a CDN server, which routes the request to the nearest server holding the resource. This speeds up access to the site and, in turn, reduces load on the origin server.
Advantages
1. To speed up site access (geographic proximity). 2. To achieve full network coverage across carriers and regions. 3. To improve site security: CDN load balancing and distributed storage increase reliability, effectively adding an umbrella over the site against the vast majority of Internet attacks; anti-attack systems can also block malicious attacks on the site. 4. For remote backup: if a server fails unexpectedly, the system calls on other nearby healthy nodes to serve requests, providing close to 100% reliability so the site effectively never goes down. 5. To save costs.
Proxies
Forward proxy
What is usually just called a proxy is a forward proxy. A forward proxy hides the real requesting client: the server does not know who the real client is, and the service the client wants is requested on its behalf by the proxy server.
The reverse proxy
A reverse proxy hides the real server. When we request a website, thousands of servers may be serving us, but we neither know nor need to know which one; we only need to know the reverse proxy server, which forwards our request to a real server. Reverse proxies are commonly used for load balancing!
Load balancing
It’s also an interview test point
Instant messaging (IM)
Based on HTTP protocol
-
Short polling
- The basic idea of short polling is that the browser sends an HTTP request to the server at fixed intervals, and the server responds immediately whether or not any data has been updated. - Instant messaging is thus essentially simulated: the browser keeps sending requests and receiving responses, approximating real-time changes in server data. - The advantage of this approach is that it is simple and easy to understand. - The disadvantage is that constantly establishing HTTP connections wastes resources on both server and client; as the number of users grows, the pressure on the server becomes quite unreasonable.
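A sketch of the client-side loop; `fetch` stands in for one complete HTTP request/response cycle (in a browser the loop would be `setInterval`):

```python
def short_poll(fetch, max_polls=10):
    """Issue a request every tick; the server answers immediately each time."""
    for _ in range(max_polls):
        data = fetch()           # one full HTTP request/response, update or not
        if data is not None:
            return data          # an update finally arrived
    return None                  # every poll came back empty
```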
-
Long polling
- The client sends a request to the server. On receiving it, the server holds the request open and checks whether its data has been updated. - If there is an update it responds; if there is no data at all it returns only after a certain time limit is reached. After processing the server's response, the client-side JavaScript handler makes another request, re-establishing the connection. - Compared with short polling, long polling greatly reduces unnecessary HTTP requests and saves resources. - Its disadvantage is that the held-open connections themselves waste resources.
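A sketch of the server side of long polling: the "request" is held open until an update appears or the time limit expires (`get_update` is a stand-in for checking the server's data):

```python
import time

def long_poll_server(get_update, timeout=2.0, interval=0.05):
    """Hold the request open until data appears or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        data = get_update()
        if data is not None:
            return data          # respond as soon as an update exists
        time.sleep(interval)     # otherwise keep the connection hanging
    return None                  # time limit reached: respond with no data
```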
-
SSE
- The basic idea of SSE is that the server pushes information to the client as a stream. - Strictly speaking, the HTTP protocol does not allow the server to actively push; the workaround is for the server to declare to the client that what it is about to send is a stream. That is, instead of a one-time packet, a stream of data is sent continuously and the connection is not closed, as with video playback, while the client waits for new data from the server. - SSE uses this streaming mechanism to push information to the browser. It is built on the HTTP protocol and is currently supported by all browsers except IE/Edge. Compared with the two polling approaches, it needs far fewer HTTP requests and so saves resources.
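The stream the server declares is `text/event-stream`, whose wire format is simple enough to sketch: each message is one or more `field: value` lines ended by a blank line:

```python
def sse_format(data, event=None):
    """Serialize one Server-Sent Events message (text/event-stream format)."""
    lines = []
    if event:
        lines.append(f"event: {event}")  # optional named event type
    lines.append(f"data: {data}")        # the payload line
    return "\n".join(lines) + "\n\n"     # a blank line terminates the message
```

On the browser side, `new EventSource(url)` parses exactly this format and fires events as each blank-line-terminated message arrives.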
WebSocket
- WebSocket is a new protocol defined alongside HTML5. Unlike the traditional HTTP protocol, WebSocket allows the server to actively push information to the client. - WebSocket is a full-duplex protocol: the communicating parties are equal and can send messages to each other, whereas SSE is one-way, with only the server able to push to the client; anything the client sends belongs to a separate HTTP request. - The disadvantage of WebSocket is more complex configuration on the server side.
Network security
[Web security / operating systems / performance optimization] – Big Cabin
XSS
-
Prevention
-
Types
- Stored - Reflected - DOM-based
CSRF
-
token
- Storage options - localStorage - cookie (sent automatically, does not cross domains) - HTTP Authorization header
-
cookie
The transport layer
Transport-layer protocols provide logical communication between processes on different hosts. The transport layer exists only in end systems.
UDP
UDP is a connectionless, unreliable transport-layer protocol. It provides the bare minimum a transport layer must implement, adding little to IP beyond multiplexing/demultiplexing and a small amount of error detection. UDP suits applications with high real-time requirements.
-
Characteristics
- UDP is connectionless: the two parties do not shake hands before sending packets. Because there is no handshake, there is no connection-establishment delay as with TCP, and because there is no connection, the end systems need not keep connection state. - UDP provides best-effort delivery; it does not guarantee reliable data delivery. - UDP has no congestion-control or flow-control mechanism, so the rate at which UDP packets are sent is not limited. - Because a UDP socket is identified only by destination address and destination port, UDP supports one-to-one, one-to-many, many-to-one, and many-to-many communication. - The UDP header is small: only 8 bytes.
-
Segment structure
- A UDP segment consists of a header and application data. - The header has four fields: source port, destination port, length, and checksum, each two bytes long. - The length field covers the entire segment: the header plus the application data. - The checksum is UDP's error-detection mechanism; although errors can be detected, UDP can do nothing to recover from them.
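The four two-byte fields pack directly with `struct` (a sketch with a zero checksum, which UDP over IPv4 permits; the ports are arbitrary examples):

```python
import struct

def udp_segment(src_port, dst_port, payload, checksum=0):
    """Build a UDP segment: 8-byte header + application data."""
    length = 8 + len(payload)   # the length field covers header AND data
    # !HHHH = network byte order, four unsigned 16-bit fields
    header = struct.pack("!HHHH", src_port, dst_port, length, checksum)
    return header + payload
```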
TCP
-
Characteristics
- TCP is connection-oriented: a three-way handshake establishes the connection before communication, and the end systems must maintain state for the connection between the two parties. - TCP provides a reliable data-transfer service through sequence numbers, acknowledgment numbers, timed retransmission, and checksums. - TCP provides point-to-point service: a connection between a single sender and a single receiver. - TCP provides full-duplex service: both parties can send and receive data. - TCP provides congestion control: when the network is congested, TCP throttles the data sending rate, reducing packet loss and easing congestion. - TCP provides flow control to match the sending rate to the receiving rate: if the receiver's buffer is small, the sender slows down to avoid losses from a full buffer.
-
Message segment structure
-
A TCP segment consists of a header and data; the header is usually 20 bytes.
-
Header fields: source port, destination port, sequence number, acknowledgment number, receive window, header-length field, flag field, checksum
1. The source and destination ports are used for multiplexing and demultiplexing segments. 2. The 32-bit sequence number and 32-bit acknowledgment number together implement the reliable data-transfer service. 3. Flow control is achieved with a 16-bit receive-window field, which gives the number of bytes the receiver is willing to accept. 4. A 4-bit header-length field gives the length of the TCP header in 32-bit words. 5. A 6-bit flag field: the ACK bit indicates the acknowledgment number is valid; the RST, SYN, and FIN bits are used for connection setup and teardown; the PSH bit tells the receiver to hand the data to the upper layer immediately; the URG bit indicates urgent data in the segment. 6. The checksum provides error detection of the data.
-
Three-way handshake
Fields: ACK is the acknowledgment flag: when ACK = 1 the acknowledgment-number field is valid, when ACK = 0 it is not. ack is the acknowledgment number: the sequence number of the data expected next. SYN is the synchronization flag, used to synchronize sequence numbers when establishing a connection: SYN = 1 with ACK = 0 marks a connection-request segment; if the peer agrees to the connection, it sets ACK to 1 in its reply. seq is the sequence number of the data being sent (many articles describe the initial value as random). First handshake: the client wakes the server by sending SYN = 1, seq = x, requesting synchronization and announcing its sequence number x. Second handshake: the server replies SYN = 1, ACK = 1, seq = y, ack = x + 1, acknowledging the client's sequence number (x + 1 confirms both the reply and successful delivery) and announcing its own sequence number y. Third handshake: the client replies ACK = 1, seq = x + 1, ack = y + 1, confirming it received the server's segment and telling the server to send data at y + 1 next. To sum up, the handshake uses two kinds of fields in each direction: the SYN/ACK flag bits, and the seq/ack numbers that agree on where each side's data starts.
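The three messages can be sketched as plain data (a toy model of the field arithmetic, not a real TCP stack; the initial sequence numbers are passed in):

```python
def three_way_handshake(client_isn, server_isn):
    # 1) client -> server: SYN=1, seq=x
    syn = {"SYN": 1, "seq": client_isn}
    # 2) server -> client: SYN=1, ACK=1, seq=y, ack=x+1
    syn_ack = {"SYN": 1, "ACK": 1, "seq": server_isn, "ack": syn["seq"] + 1}
    # 3) client -> server: ACK=1, seq=x+1, ack=y+1
    ack = {"ACK": 1, "seq": syn["seq"] + 1, "ack": syn_ack["seq"] + 1}
    return [syn, syn_ack, ack]
```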
-
Four-way wave
Fields: ACK is the acknowledgment flag (1 = valid); FIN is the finish-request flag (1 = valid, not to be confused with SYN!); seq is the sequence number of the data being sent. First wave: the client initiates the disconnect, sending FIN = 1, seq = x to the server. Second wave: after receiving it, the server sends ACK = 1, seq = y, ack = x + 1 to the client; note that no FIN flag is returned yet. Third wave: the client waits, and the server replies FIN = 1, ACK = 1, seq = z, ack = x + 1, indicating the connection can now be closed. Notice that seq has changed to z, because data transmitted in between has advanced the sequence number. Fourth wave: the client sends the final acknowledgment ACK = 1, seq = x + 1, ack = z + 1 to close the connection, then waits for 2MSL (twice the maximum segment lifetime). Because of the timeout-retransmission mechanism, and because a round trip occupies one such lifetime, if no retransmitted FIN arrives from the server during that round trip the final ACK is considered successfully received; otherwise the ACK is retransmitted, ensuring the connection is torn down reliably.
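The four messages as plain data, in the same toy-model style as the handshake (x, y, z are the sequence numbers at each stage, passed in):

```python
def four_way_wave(x, y, z):
    # 1) client -> server: FIN=1, seq=x  (client requests the close)
    fin1 = {"FIN": 1, "seq": x}
    # 2) server -> client: ACK=1, seq=y, ack=x+1  (no FIN yet: data may remain)
    ack1 = {"ACK": 1, "seq": y, "ack": x + 1}
    # 3) server -> client: FIN=1, ACK=1, seq=z, ack=x+1  (seq advanced to z)
    fin2 = {"FIN": 1, "ACK": 1, "seq": z, "ack": x + 1}
    # 4) client -> server: ACK=1, seq=x+1, ack=z+1  (then wait 2MSL)
    ack2 = {"ACK": 1, "seq": x + 1, "ack": z + 1}
    return [fin1, ack1, fin2, ack2]
```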
-
How reliability is guaranteed
-
Sequence number
-
Acknowledgment number
-
The checksum
-
ARQ protocol
ARQ (Automatic Repeat reQuest) ensures reliable data delivery through timeouts and retransmission. It is a very important mechanism by which TCP achieves reliable data transfer.
-
Stop-and-wait ARQ
- The basic principle of stop-and-wait ARQ is that the sender sets a timer for each packet it sends; when the acknowledgment for that packet returns, the timer is cleared and the next packet is sent. If no acknowledgment arrives within the allotted time, the previous packet is resent. - On the receiving side, every received packet is acknowledged; a duplicate packet is discarded but still acknowledged, and a corrupted packet is simply discarded. - The drawback of stop-and-wait ARQ is that each packet must be acknowledged before the next can be sent, so channel utilization is low.
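The send side of stop-and-wait can be sketched as a loop; here `channel(pkt)` is a stand-in that models "send the packet, start the timer, return True if the ACK arrived in time" (a toy model, with a retry cap added so a dead channel does not loop forever):

```python
def stop_and_wait(packets, channel, max_retries=10):
    """channel(pkt) models send + timer: True if the ACK returned in time."""
    delivered = []
    for pkt in packets:
        for _ in range(max_retries):
            if channel(pkt):          # ACK arrived: clear the timer
                delivered.append(pkt)
                break                 # move on to the next packet
            # timeout: retransmit the same packet (loop again)
        else:
            raise TimeoutError(f"packet {pkt} never acknowledged")
    return delivered
```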
-
Continuous ARQ protocol
- Continuous ARQ solves stop-and-wait's low channel utilization: the sender transmits a group of packets continuously and then waits for their acknowledgments. Recovery from errors within the group is generally handled with either the sliding-window protocol or the selective-repeat protocol. - Sliding-window protocol: the sender maintains a send window. Packets before the window have been sent and acknowledged; the window contains packets sent but not yet acknowledged and packets allowed but not yet sent. The sender transmits all packets in the window in sequence and sets a timer, which can be understood as belonging to the earliest unacknowledged packet. If an acknowledgment for a packet arrives within the timer's span, the window slides so that its head moves just past the acknowledged packet; at that point, if sent-but-unacknowledged packets remain, the timer is reset, otherwise it is turned off. If the timer expires, all packets that were sent but not acknowledged are resent. - The receiver uses cumulative acknowledgment: for packets arriving in order it returns an acknowledgment; if an out-of-order packet arrives, it is simply discarded and the most recent in-order packet is acknowledged again. Cumulative acknowledgment guarantees that everything before the acknowledged number has arrived in order, so the send window can move past the acknowledged packets. - The drawback of the sliding-window protocol is precisely its cumulative acknowledgment: if only the first packet in the window is lost while the subsequent packets arrive in order, the protocol resends all of them, causing many packets to be discarded and retransmitted unnecessarily. - Selective-repeat protocol: because cumulative acknowledgment causes much unnecessary retransmission, selective repeat is used to solve the problem. It also maintains a send window at the sender. Before the window are sent-and-acknowledged packets; the window contains sent-but-unacknowledged packets, out-of-order acknowledged packets, and packets allowed but not yet sent; after the window are buffered packets not yet allowed to be sent. - The biggest difference from the sliding-window protocol is that the sender starts a timer for every packet it sends. When an acknowledgment for a packet arrives, its timer is cancelled, and the sender checks whether a run of consecutively acknowledged packets now starts at the window head: if so, the window moves backward past them; if not, the packet is simply marked as an acknowledged out-of-order packet. When a packet's timer expires, only that packet is retransmitted. - The receiver acknowledges every correctly received packet, whether in order or not. Out-of-order packets are buffered until all packets before them arrive to form an ordered sequence, which is then delivered to the upper layer. Packets that cannot be received correctly are simply ignored.
-
-
Congestion control algorithms
- Slow start
- Congestion avoidance
- Timeout retransmission
- Fast recovery
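How the first two algorithms interact can be illustrated with a toy sketch of the congestion window (cwnd) evolving over time (illustrative only, not a real TCP implementation):

```javascript
// Below ssthresh, cwnd doubles each RTT (slow start); at or above it,
// cwnd grows by one segment per RTT (congestion avoidance).
function nextCwnd(cwnd, ssthresh) {
  return cwnd < ssthresh ? cwnd * 2 : cwnd + 1;
}

let cwnd = 1;
const ssthresh = 8; // on a timeout, real TCP would halve this and reset cwnd to 1
const history = [];
for (let rtt = 0; rtt < 6; rtt++) {
  history.push(cwnd);
  cwnd = nextCwnd(cwnd, ssthresh);
}
console.log(history); // [1, 2, 4, 8, 9, 10]
```

The exponential-then-linear shape of `history` is the classic slow-start/congestion-avoidance curve.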
-
The network layer
Network-layer protocols mainly implement logical communication between different hosts.
IP Internet protocol
The Internet Protocol (IP) defines addressing and forwarding at the network layer: every host attached to the network is assigned an IP address. IPv4 uses 32-bit addresses, while IPv6 uses 128-bit addresses.
Routing protocol
Routing protocols, such as distance-vector routing algorithms, determine the path a datagram takes from source to destination.
The application layer
http
-
HTTP (HyperText Transfer Protocol) defines the format and rules by which clients and servers exchange messages. It uses port 80 by default, and TCP as its transport-layer protocol to guarantee reliable data transfer.
-
HTTP is a stateless protocol and the HTTP server does not store any information about the client.
-
HTTP has two connection modes: non-persistent connections and persistent connections.
- With non-persistent connections, the server must establish and maintain a brand-new connection for each requested object.
- With persistent connections, the TCP connection is left open by default and can be reused by multiple requests.
- The advantage of persistent connections is avoiding the time spent on a three-way handshake for every TCP connection.
- Non-persistent connections were used in HTTP/1.0 and earlier, but the client could ask the server not to close the TCP connection by adding Connection: keep-alive to the request.
- Persistent connections are the default from HTTP/1.1 onward. Most browsers currently allow up to six persistent connections to the same domain.
-
The HTTP message
-
The request message
- The first line is the request line: method + URL + protocol version.
- The lines that follow are header lines, e.g. User-Agent: browser kernel + version + system.
- After the headers comes a blank line that cannot be omitted; it separates the headers from the entity body.
- Last is the entity body (empty for this GET request).

```
GET / HTTP/1.1
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5)
Accept: */*
```
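The three-part structure above can be demonstrated with a small parser (a sketch for illustration, not a spec-complete HTTP parser):

```javascript
// Split a raw HTTP request into its three parts: request line, header
// lines, and entity body (separated from the headers by a blank line).
function parseRequest(raw) {
  const [head, body = ''] = raw.split('\r\n\r\n');
  const [requestLine, ...headerLines] = head.split('\r\n');
  const [method, path, version] = requestLine.split(' ');
  const headers = {};
  for (const line of headerLines) {
    const i = line.indexOf(':');
    headers[line.slice(0, i).trim()] = line.slice(i + 1).trim();
  }
  return { method, path, version, headers, body };
}

const req = parseRequest(
  'GET / HTTP/1.1\r\nHost: example.com\r\nAccept: */*\r\n\r\n'
);
console.log(req.method, req.path, req.headers.Host); // GET / example.com
```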
-
The response message
- The first line is the status line: protocol version + status code + status phrase.
- After that come the header lines.
- And finally the entity body.

```
HTTP/1.0 200 OK
Content-Type: text/plain
Content-Length: 137582
Expires: Thu, 05 Dec 1997 16:00:00 GMT
Last-Modified: Wed, 5 August 1996 15:55:28 GMT
Server: Apache 0.84

<html>
<body>Hello World</body>
</html>
```
-
Headers can be divided into four types: request header, response header, general header and entity header.
- General headers and entity headers may appear in both request and response messages; what differ are the request headers and response headers.
- Common request headers: Accept (media types the client accepts), Accept-Charset (accepted character sets), Host (host name).
- Common response headers: ETag (resource matching/validation information), Location (the URI the client should redirect to).
- Common general headers: Cache-Control (cache policy), Connection (persistent-connection management).
- Common entity headers: Content-Length (size of the entity body), Expires (entity-body expiry time), Last-Modified (the resource's last modification time).
-
-
disadvantages
- HTTP/1.1 uses persistent connections by default: multiple requests can reuse one TCP connection, but requests on the same connection are served strictly in order. The server only processes the next request after finishing the previous response, so if one response is slow, every request behind it queues up; this is called head-of-line blocking. Once head-of-line blocking exhausts the maximum number of persistent connections, the remaining resources must wait for other requests to complete.
- To mitigate this, either reduce the number of requests or open several persistent connections at once, which is why techniques such as sprite images and inlined scripts were used to optimize sites.
http2
In 2009, Google unveiled its own SPDY protocol to address the inefficiencies of HTTP/1.1. Once the protocol proved viable in Chrome, it was used as the basis for HTTP/2, which inherited its major features. HTTP/2 was released in 2015.
-
New features
- HTTP/2 is a binary protocol. In HTTP/1.1 the header must be text (ASCII encoded), while the body may be text or binary. In HTTP/2 both headers and bodies are binary and are collectively called "frames", divided into header frames and data frames. The frame concept is the foundation of its multiplexing.
- HTTP/2 is multiplexed. It still reuses one TCP connection, but within that connection both client and server can send multiple requests or responses simultaneously and out of order, avoiding head-of-line blocking at the HTTP level. Because HTTP/2 packets are sent out of order, consecutive packets on the same connection may belong to different requests, so each packet must be labelled. HTTP/2 calls all the packets of one request or response a stream; each stream has a unique id, and every packet carries its stream id so the receiver can tell which stream it belongs to.
- HTTP/2 optimizes headers by compressing them (the HPACK mechanism) and avoiding re-sending duplicate fields.
- Both client and server maintain a header table: fields are stored in the table under an index number, and afterwards only the index is sent instead of the full field.
- HTTP/2 allows the server to send resources to the client unsolicited; this is called server push. With server push, necessary resources are pushed to the client ahead of time, reducing latency. Note that what the server proactively pushes under HTTP/2 are static resources, which is different from WebSocket or SSE sending real-time data to the client.
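The stream-id labelling described above can be sketched with a toy demultiplexer (the frame shape here is simplified for illustration, not the real wire format):

```javascript
// Frames arrive interleaved, each tagged with a stream id; the receiver
// reassembles them per stream, so two responses can share one connection.
function demux(frames) {
  const streams = {};
  for (const { streamId, data } of frames) {
    streams[streamId] = (streams[streamId] || '') + data;
  }
  return streams;
}

const out = demux([
  { streamId: 1, data: 'He' },
  { streamId: 3, data: '<html>' },
  { streamId: 1, data: 'llo' },
  { streamId: 3, data: '</html>' },
]);
console.log(out); // { '1': 'Hello', '3': '<html></html>' }
```

Even though the frames of the two streams interleave on the wire, each stream reassembles independently, which is exactly why HTTP-level head-of-line blocking disappears.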
-
disadvantages
- Because HTTP/2 uses multiplexing, generally only one TCP connection is needed per domain. Since multiple streams share that one TCP connection, they are subject to the same flow control and congestion control: as soon as one stream runs into congestion, none of the other streams can get data through, and all subsequent data is blocked. This problem stems from HTTP/2 running over TCP and has little to do with HTTP/2's own design.
http3
https
-
The TLS handshake
1. The client sends a request to the server containing the protocol version number, a client-generated random number, and the encryption methods the client supports.
2. After receiving the request, the server confirms the encryption method both sides will use, and returns its certificate along with a server-generated random number.
3. After verifying that the server's certificate is valid, the client generates a new random number, encrypts it with the public key in the certificate, and sends it to the server. It also sends a hash of all the preceding handshake content for the server to verify.
4. The server decrypts the client's random number with its own private key, and likewise provides a hash of all the preceding content for the client to verify.
5. Client and server each combine the three random numbers, using the agreed encryption method, into a session key that will encrypt the subsequent conversation.
-
encryption
-
Convert HTTP to HTTPS
- axios configuration: set baseURL to an HTTPS URL.
- meta tag: `<meta http-equiv="Content-Security-Policy" content="upgrade-insecure-requests">`
-
You can use the session ID or session ticket to restore the disconnected SSL connection.
- Session ID: every session gets a number. When a session is interrupted and the connection is re-established later, as long as the client presents this number and the server still has a record of it, both sides can continue using the previous key instead of generating a new one. All browsers currently support this approach. Its drawback is that the session ID lives on only one server: if our request is load-balanced to another server, the session cannot be resumed.
- Session ticket: sent by the server to the client during the previous session, encrypted so that only the server can decrypt it. The ticket contains the session information, such as the key and encryption method. This way, no matter which server our request is routed to, once that server decrypts the ticket it can recover the previous session's information without regenerating the session key.
DNS protocol
The DNS protocol provides a host-name-to-IP-address translation service and is commonly called the domain name system. It is a distributed database made up of hierarchical DNS servers, plus an application-layer protocol that defines how hosts query that distributed database. DNS runs over UDP and uses port 53.
-
hierarchy
host name . second-level domain . top-level domain . root domain
-
The query
- The DNS query proceeds as follows: the request is first sent to the local DNS server, which performs the lookup on our behalf.
- The local server queries a root DNS server for the NS record and A record (IP address) of the responsible top-level-domain (TLD) server.
- It then queries the TLD server for the NS record and A record (IP address) of the second-level (authoritative) DNS server.
- Finally it queries the authoritative server for the host name's IP address.
- For example, to resolve the IP address of www.baidu.com:
- We first send the request to the local DNS server, which checks whether it has the domain name cached.
- If not, it sends a request to a root DNS server, which returns a list of IP addresses of the TLD servers responsible for .com.
- The local DNS server then asks one of the .com TLD servers, which returns the IP address list of the authoritative DNS servers responsible for baidu.com.
- The local DNS server then asks one of those authoritative servers, which finally returns the IP address list for the host name.
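The iterative walk above can be sketched as a toy resolver (the server names and the final address are made up for illustration):

```javascript
// Each server either refers us to a more specific server or answers with
// an A record; the local resolver iterates until it gets an address.
const servers = {
  root:         { refer: { 'com': 'tld-com' } },
  'tld-com':    { refer: { 'baidu.com': 'auth-baidu' } },
  'auth-baidu': { answer: { 'www.baidu.com': '1.2.3.4' } },
};

function resolve(name) {
  let current = 'root';
  for (;;) {
    const srv = servers[current];
    if (srv.answer && srv.answer[name]) return srv.answer[name];
    // crude suffix match: follow the referral whose zone ends the name
    const zone = Object.keys(srv.refer).find((z) => name.endsWith(z));
    current = srv.refer[zone];
  }
}

console.log(resolve('www.baidu.com')); // 1.2.3.4
```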
-
way
- Recursive query: after receiving a query request, the DNS server itself forwards the request to lower-level DNS servers and returns the final result to the user. With recursive queries, the user only issues one request. The request we normally send to the local DNS server is a recursive query: we ask once, and the local DNS server hands us the final answer.
- Iterative query: the DNS server returns the result of a single step, and the next-level query is issued by the requester itself, so multiple query requests are needed. The local DNS server sends multiple queries to the other DNS servers, since each of them returns only one step's result; the local server performs each next-level query itself until it obtains the final answer. That repeated asking is the iteration.
-
Load balancing
DNS can be used for load balancing across redundant servers. Large websites today typically run on multiple servers, so one domain name can map to several server addresses. When a user issues a DNS request for the site's domain name, the DNS server returns the whole set of IP addresses for that domain, but rotates their order in each response. Since users generally pick the first address to send their request to, requests end up spread evenly across the servers, achieving load balancing.
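The rotation trick can be sketched in a few lines (addresses are made up; real DNS servers implement this round-robin internally):

```javascript
// Return the same set of A records, rotated one step per response, so
// clients that pick the first address get spread across the servers.
function makeRoundRobin(addresses) {
  let offset = 0;
  return function respond() {
    const rotated = addresses.slice(offset).concat(addresses.slice(0, offset));
    offset = (offset + 1) % addresses.length;
    return rotated;
  };
}

const respond = makeRoundRobin(['10.0.0.1', '10.0.0.2', '10.0.0.3']);
console.log(respond()[0]); // 10.0.0.1
console.log(respond()[0]); // 10.0.0.2
console.log(respond()[0]); // 10.0.0.3
```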
-
Why UDP
- DNS uses UDP as its transport-layer protocol to avoid the connection delay of TCP. Resolving one domain name often requires querying several DNS servers; with TCP, every query would pay a connection-establishment delay, making DNS slow, and since most lookups are triggered by the browser requesting a page, the page would be kept waiting.
- Using UDP for DNS has a catch. For historical reasons the minimum MTU of physical links was 576 bytes, so to keep packets under that limit the length of a UDP DNS message was limited to 512 bytes. A UDP-based DNS response longer than 512 bytes is truncated to 512 bytes, and the user may receive an incomplete DNS answer.
- When a DNS message exceeds the limit, it is not split into multiple segments the way TCP data would be: UDP maintains no connection state, so there is no way to tell which segments belong to the same data, and the excess is simply cut off. To work around this, the query can be retried over TCP.
- DNS also has a security problem: there is no way to be sure a reply is genuine, because replies can be forged. DNS over HTTPS is used to address this.
Data structures and algorithms
The data structure
Arrays and matrices
The linear table
- Linked list
- Queue
- Stack
The tree
-
concept
- Degree of a node
- Degree of a tree
- Leaf nodes
- Branch nodes
- Internal nodes
- Parent nodes
- Child nodes
- Sibling nodes
- Level
-
Binary tree
- Traversal
  - Pre-order traversal
  - In-order traversal
  - Post-order traversal
  - Level-order traversal
- Types
  - Full binary tree
  - Balanced binary tree
  - Binary search tree
  - Huffman tree (optimal binary tree)
  - Threaded binary tree
figure
- Storage
  - Adjacency matrix
  - Adjacency list
- Traversal
  - Depth-first
  - Breadth-first
- Minimum spanning tree
  - Prim's algorithm
  - Kruskal's algorithm
Hash table
Generalized table
basis
features
- Finiteness
- Determinism
- Zero or more inputs
- One or more outputs
- Effectiveness
The complexity of the
- Time complexity
- Space complexity
The sorting
Sorting algorithm
-
Insertion sort
-
Insertion sort
Time: O(n²) average, O(n²) worst. Space: O(1).
Hill sorting
Time: O(n^1.3) average, O(n²) worst. Space: O(1).
-
-
Selection class sort
-
Simple selection sort
Time: O(n²) average, O(n²) worst. Space: O(1).
Heap sort
Time: O(n log n) average, O(n log n) worst. Space: O(1).
- Max-heap (big-top heap)
- Min-heap (small-top heap)
-
-
Exchange sort
-
Bubble sort
Time: O(n²) average, O(n²) worst. Space: O(1).
Quick sort
Time: O(n log n) average, O(n²) worst. Space: O(log n).
-
-
Other sort
-
Merge sort
Time: O(n log n) average, O(n log n) worst. Space: O(n).
Radix sort
Time: O(d·(n+k)) average and worst (linear in n; non-comparison sort). Space: O(n+k).
Bucket sort
Time: O(n+k) average, O(n²) worst. Space: O(n+k).
-
stable
- Bubble sort
- Insertion sort
- Merge sort
- Radix sort
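Of the stable sorts listed above, merge sort shows most clearly why stability holds; a sketch (the key function and sample data are made up):

```javascript
// Merge sort is stable because, on a tie, the element from the left half
// is taken first, preserving the original relative order of equal keys.
function mergeSort(arr, key = (x) => x) {
  if (arr.length <= 1) return arr;
  const mid = Math.floor(arr.length / 2);
  const left = mergeSort(arr.slice(0, mid), key);
  const right = mergeSort(arr.slice(mid), key);
  const out = [];
  let i = 0, j = 0;
  while (i < left.length && j < right.length) {
    // '<=' keeps equal elements in their original order (stability)
    if (key(left[i]) <= key(right[j])) out.push(left[i++]);
    else out.push(right[j++]);
  }
  return out.concat(left.slice(i), right.slice(j));
}

const items = [{ k: 2, id: 'a' }, { k: 1, id: 'b' }, { k: 2, id: 'c' }];
const sorted = mergeSort(items, (x) => x.k);
console.log(sorted.map((x) => x.id).join('')); // 'bac': 'a' still before 'c'
```

If the `<=` were `<`, equal keys could swap places and the sort would no longer be stable.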
platform
leetcode
Cattle guest
Vue
The life cycle
10 life cycles (including Keep-alive)
Computed is different from Watch
Parent component lifecycle hook functions
- Mount/render process
- Parent-child component update process
- Destruction process
Component communication mode
Vuex
Vuex is a state management pattern developed specifically for Vue.js applications. At the heart of every Vuex application is the store: basically a container that holds most of your app's shared state.
The module
The effect of the Action
Differences from global VUE
Vue-router
Routing patterns
- hash: uses the URL hash for routing.
- history: relies on the HTML5 History API and server configuration.
- abstract: used when neither of the above APIs is available; the router automatically falls back to this mode.
Vue3
Monitoring mechanism
- Proxy vs Object.defineProperty.
- Lazy observation: in the 2.x versions, observers are created for all data up front, no matter how much of it there is, which puts significant pressure on page load when the data is large. The 3.x versions only create observers for the data used to render the initially visible portion, and the 3.x observers are also more efficient.
- Immutable observable.
- More precise change notification: in 2.x, adding a property to an object with Vue.set re-runs every watcher of that object; in 3.x, only the watchers that depend on that property run again.
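A minimal sketch of the 3.x-style Proxy approach (illustrative only; real Vue 3 tracks effects per key and caches wrapped objects rather than re-wrapping on every read):

```javascript
// A Proxy intercepts property access on the whole object, and nested
// objects are only wrapped lazily, when they are actually read.
function reactive(target, log) {
  return new Proxy(target, {
    get(obj, key) {
      log.push(`track ${String(key)}`); // dependency tracking happens here
      const value = obj[key];
      return typeof value === 'object' && value !== null
        ? reactive(value, log) // lazy wrapping of nested objects
        : value;
    },
    set(obj, key, value) {
      obj[key] = value;
      log.push(`trigger ${String(key)}`); // effects would re-run here
      return true;
    },
  });
}

const log = [];
const state = reactive({ user: { name: 'wk' } }, log);
state.user.name = 'ali'; // reads 'user', then writes 'name'
console.log(log); // ['track user', 'trigger name']
```

Unlike Object.defineProperty, the Proxy also catches property additions and deletions, which is why Vue 3 no longer needs Vue.set.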
Composition API ()
Enhance typescript support
Performance improvement
- Smaller, faster
- Tree-shaking optimization
- Fragments and cross-component rendering
- Custom renderer
A template engine
The interpolation
instruction
The filter
SPA/SSR
To optimize the
The code level
- Distinguish the use cases of v-if and v-show
- Distinguish the use cases of computed and watch
- Always give v-for items a key, and avoid combining v-for with v-if
- Long-list performance optimization
- Event teardown
- Lazy-load image resources
- Lazy-load routes
- Import third-party plug-ins on demand
- Optimize infinite-list performance
- Server-side rendering (SSR) or pre-rendering
Engineering level
- Compress images with webpack
- Reduce redundant code from the ES6-to-ES5 transform
- Extract common code
- Precompile templates
- Extract each component's CSS
- Optimize SourceMap
- Analyze the build output
- Optimize the compilation of Vue projects
Basic Web Technologies
- Enable gzip compression
- Use browser caching
- Use a CDN
- Use Chrome Performance to find performance bottlenecks
PWA - Progressive Web App
- Advantages
  - Can generate a small desktop icon, so users can visit without opening a browser
  - Speeds up page access through network caching, enabling progressive or even offline access and improving user experience
  - Enables app-like push: system notifications are generated and pushed to the user
- Composition
  - App Manifest
  - Service Worker (the key part)
  - Notifications API
  - Push API
The principle of
v-model
MVVM
Two-way data binding
-
Listener: Observer. The Vue 2.0 source code uses Object.defineProperty() to hijack the setter/getter of each individual data property.
-
Object.defineProperty
Data descriptors:
- configurable: whether the property may be deleted or have its descriptor changed (defaults to false).
- enumerable: whether the property shows up during enumeration, e.g. in for...in loops (defaults to false).
- value: the value of the property.
- writable: whether value may be assigned to (defaults to false).
Accessor descriptors:
- get: the property's getter function, called when the property is read (undefined if there is no getter).
- set: the property's setter function, called when the property is written (undefined if there is no setter).
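A minimal sketch of how Vue 2 uses these descriptors to hijack a property (the `log` array here is only for demonstration; in Vue the getter collects the current Watcher and the setter calls dep.notify()):

```javascript
// Wrap a property with Object.defineProperty so reads and writes
// can be intercepted.
function defineReactive(obj, key, val) {
  Object.defineProperty(obj, key, {
    enumerable: true,
    configurable: true,
    get() {
      log.push(`get ${key}`); // in Vue: Dep collects the current Watcher here
      return val;
    },
    set(newVal) {
      if (newVal === val) return;
      val = newVal;
      log.push(`set ${key}=${newVal}`); // in Vue: dep.notify() runs here
    },
  });
}

const log = [];
const data = {};
defineReactive(data, 'msg', 'hi');
data.msg;           // read: triggers the getter
data.msg = 'hello'; // write: triggers the setter
console.log(log); // ['get msg', 'set msg=hello']
```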
-
-
Parser: Compile
Parses the template's directives, replaces the template variables with data, and initializes the view; binds each directive's node to its update function and registers the corresponding Watcher subscriber.
-
Subscriber: Watcher
A Watcher adds itself to the dependency collector Dep via the getter; when data changes, the listener Observer notifies the Dep, which in turn tells the parser Compile to update the view. This implements the binding between Observer and Compile.
-
Dependency collector: Dep
Each reactive property has a Dep that is responsible for collecting its Watcher subscribers and, when the data changes, notifying each subscriber's update function.
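The Dep-Watcher relationship can be sketched as a tiny publish/subscribe pair (names simplified from the Vue 2 source):

```javascript
// A Dep collects Watchers while a reactive getter runs, and notifies
// all of them when the setter fires.
class Dep {
  constructor() { this.subs = []; }
  depend(watcher) { this.subs.push(watcher); }            // collect a subscriber
  notify(value) { this.subs.forEach((w) => w.update(value)); }
}

class Watcher {
  constructor(cb) { this.cb = cb; }
  update(value) { this.cb(value); } // in Vue this re-runs the render/watch fn
}

const dep = new Dep();
const seen = [];
dep.depend(new Watcher((v) => seen.push(`view: ${v}`)));
dep.depend(new Watcher((v) => seen.push(`log: ${v}`)));

dep.notify('new data'); // a setter firing would call this
console.log(seen); // ['view: new data', 'log: new data']
```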
Virtual DOM
- Advantages and disadvantages
- Implementation principle
  - Virtual (abstract) DOM tree
  - Diff algorithm
  - Patch
Unidirectional data flow
Reference data type assignment
vm.$set
Why is data a function
Render to update the entire process
To sum up: there are already plenty of excellent in-depth Vue articles on Juejin, so I'm being lazy here. I've also been drawing a Vue source-code mind map lately. Keep it up!
Strategically left brief for now…
engineering
Webpack
Principle and Function
The build process
1. Webpack starts bundling from the entry file.
2. It records every dependency, forming a dependency tree.
3. Dependent resources are imported according to that tree.
4. Modules are grouped into chunks (code blocks).
5. Each module goes through all processing (compilation, etc.): packaging.
6. The processed resource files are emitted: the bundle.
7. Hence webpack is collectively known as a static module bundler.
configuration
-
loader
Styles: style-loader, css-loader, less-loader, sass-loader, etc. Files: raw-loader, file-loader, url-loader, etc. Compilation: babel-loader, coffee-loader, ts-loader, etc. Linting and testing: mocha-loader, jshint-loader, eslint-loader, etc.
-
plugin
- UglifyJsPlugin: compresses and obfuscates code.
- CommonsChunkPlugin: improves packaging efficiency by bundling third-party libraries and business code separately.
- HtmlWebpackPlugin: generates HTML from a template and automatically references the emitted CSS and JS files.
- DefinePlugin: configures global constants at compile time, useful for giving development and release builds different behavior.
- DllPlugin: pre-builds third-party packages so that only business code needs rebuilding.
- DllReferencePlugin: references the manifest.json generated by the DllPlugin configuration, which maps dependent modules to module ids.

Writing a plugin:
1. Write a named JavaScript function (or class).
2. Define an apply method on its prototype.
3. Specify the webpack event hook to tap into.
4. Work with the specific data of webpack's internal instances.
5. Invoke webpack's callback when the work is done.

Before writing a plugin you need to understand the compiler and compilation objects, as well as the stages and hooks of the webpack lifecycle. Plugins are more powerful than loaders: they intercept webpack's execution via hooks.
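The five steps above can be sketched with a minimal plugin and a mock compiler (the mock stands in for webpack so no install is needed; `done` is a real webpack hook name, everything else here is illustrative):

```javascript
// A named class with an apply(compiler) method that taps a compiler hook.
class FileListPlugin {
  apply(compiler) {
    // tap the 'done' hook, which webpack fires after each build
    compiler.hooks.done.tap('FileListPlugin', (stats) => {
      this.lastStats = stats; // work with the data webpack hands us
    });
  }
}

// Minimal mock of webpack's hook system, for illustration only.
const calls = [];
const mockCompiler = {
  hooks: { done: { tap: (name, fn) => calls.push({ name, fn }) } },
};

const plugin = new FileListPlugin();
plugin.apply(mockCompiler);   // webpack calls apply() once at startup
calls[0].fn('fake-stats');    // simulate webpack firing the hook
console.log(calls[0].name, plugin.lastStats); // FileListPlugin fake-stats
```

Against real webpack the only change is dropping the mock: webpack constructs the compiler and fires the hooks itself.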
To optimize the
-
Pack to build
- HMR: when a module changes, only that module is re-bundled, greatly improving build speed.
- Babel cache: enable cacheDirectory in the corresponding loader configuration.
- DLL: bundle certain libraries (third-party libraries: jquery, React, Vue...) separately via a webpack.dll.js build file.
-
Optimize the code
- source-map: a technique that maps built code back to source code (so errors can be traced to the source).
- Caching: hash (granularity: the whole project), chunkhash (granularity: each entry file), contenthash (granularity: each file's content).
- Tree shaking: removes dead code, reducing bundle size.
- Code splitting: extract common code.
- PWA: progressive web application (accessible offline).
Vite
Vite starts a server for you locally. When the browser reads the HTML file and executes an import, only then does it send a request for the Main.vue module to the server. Vite applies a number of internal tricks, including Vue template parsing and code compilation, to turn the module into a JS file the browser can execute, and returns it to the browser. This guarantees the browser only requests and parses a module when it is actually used, maximizing on-demand loading.
The principle of
- Browser native ES Module support
```html
<div id="app"></div>
<script type="module">
import { createApp } from 'vue'
import Main from './Main.vue'
createApp(Main).mount('#app')
</script>
```
Dependency pre-bundling
-
Esbuild
As you can see, this pre-bundling step does the same work a Webpack build does, only dozens of times faster. Esbuild is written in Go and pre-bundles dependencies 10 to 100 times faster than bundlers written in JavaScript.
-
Analyzing the CommonJS
Plug-in mechanism
-
Compatible with a Rollup
In general, as long as a Rollup plugin meets the following criteria, it should work directly as a Vite plugin: if a Rollup plugin only makes sense during the build phase, it can be specified under build.rollupOptions.plugins.
Soft skills
ask
1. Your evaluation of me and areas for improvement
2. How to make the most of my project experience and teamwork ability
3. …
4. What are the specific requirements and responsibilities of this position? What abilities and qualities should one have?
5. Technology selection
6. Daily work life
7. How to build my own front-end skill tree
project
- Challenges
- Technical difficulties
- Back end
- Testing: Mocha, Jest
life
1. Future technology plans
2. What technical books I've read
3. The most memorable experience in college
5. The greatest contribution and takeaway
6. Personal reflections
7. Can you treat front-end work as a passion?
Mind mapping
If you just want a mind map, just come here and get it
-
Invincible HTML
-
Handsome CSS
-
Abba JS
-
Big Brother browser
-
Diplomat network
-
Head deflection data structures and algorithms
-
Nanny engineering
-
Soft Loneliness skills
The last
Since I'm still a junior, there's definitely room for improvement! Welcome to point things out in the comments; I'll carefully maintain and update this post! If you found it useful, give it a like! Hope you all get your offers soon! Finally... is there some networking group of big shots out there? Pull me in!
The thumb up
The series is as follows (first published on Juejin; cuties, remember to like!):
- 2 for internship ali offer experience – prepare article | creator camp ii
- 2 for internship ali offer experience – the second phase of the surface by the article | creator camp
- 2. Get internship offer experience of Ali – Treasure Chapter (1)
- 2. Get internship offer experience of Ali – Treasure Chapter (ii)