Ali Ant Financial spring recruitment phone interview (2021 1/3/5, 15:00)
First I introduced myself, and then the interviewer started asking basic-knowledge questions (the questions and answers follow; the answers were gathered from the Internet, so please contact me to remove anything that infringes).
1. The difference between threads and processes
- Fundamental difference: a process is the basic unit of resource allocation, while a thread is the basic unit of CPU scheduling and execution.
- Resource overhead: each process has its own code and data space (program context), so switching between processes is expensive. Threads can be regarded as lightweight processes: threads of the same process share the code and data space, and each thread has its own run stack and program counter (PC), so switching between threads costs little.
- Containment: a process can contain multiple threads, so its execution is not a single line but several lines (threads). A thread is part of a process, which is why threads are also called lightweight processes.
- Memory allocation: threads of the same process share the process's address space and resources, while different processes' address spaces and resources are independent of each other (see the Node.js sketch after this list).
- Impact on each other: in protected mode, the crash of one process does not affect other processes, but the crash of one thread brings down the whole process, so multi-process programs are more robust than multi-threaded ones.
- Execution: each process has its own program entry point, sequential execution, and exit, whereas a thread cannot run independently; it must live inside an application, which controls the execution of its threads. Both processes and threads can execute concurrently.
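To make the memory-allocation point concrete, here is a minimal Node.js sketch (assuming a Node version whose built-in worker_threads module is available): a worker thread shares a SharedArrayBuffer with the main thread, whereas separate processes would each get their own copy of the data.

```js
// Minimal sketch: threads of one process share memory (here via SharedArrayBuffer),
// while separate processes each have an independent address space.
const { Worker, isMainThread, workerData } = require('worker_threads');

if (isMainThread) {
  const shared = new SharedArrayBuffer(4);                 // one 32-bit slot
  const view = new Int32Array(shared);
  const worker = new Worker(__filename, { workerData: shared });
  worker.on('exit', () => {
    // The worker's write is visible here because both threads map the same memory.
    console.log('written by the worker thread:', Atomics.load(view, 0)); // 42
  });
} else {
  const view = new Int32Array(workerData);                 // same memory, not a copy
  Atomics.store(view, 0, 42);
}
```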
2. Interprocess communication
- Shared memory
Shared memory allows two or more processes to access the same logical memory: the same segment of memory is mapped into the address space of each of those processes. Data that one process writes to shared memory can be read by any other process using that segment with a simple memory read, which realizes inter-process communication. If a process writes to shared memory, the change is immediately visible to every other process with access to the same segment. Shared memory is the fastest IPC method and was designed specifically to make up for the low efficiency of other IPC mechanisms. It is often used together with other mechanisms, such as semaphores, to achieve both synchronization and communication between processes.
- Message passing: message queues
The message-passing model works by exchanging messages between cooperating processes. It is useful for exchanging small amounts of data because no conflicts need to be avoided (see the Node.js sketch after this list).
- Semaphore
A semaphore is a counter that can be used to control access to a shared resource by multiple processes. It is often used as a locking mechanism to prevent other processes from accessing a shared resource while one process is using it, so it mainly serves as a means of synchronization between processes and between threads within the same process.
- Pipe
A shared file (a pipe file, similar to a FIFO queue, written by one process and read by another) that connects the reading and writing processes and enables communication between them.
- An anonymous pipe is an unnamed, one-way pipe that transfers data between a parent process and a child process. Anonymous pipes can only be used between two processes on the local machine, not across the network.
- A named pipe can be used for communication between two processes on the local machine, and also across the network.
- Socket
Sockets are also an inter-process communication mechanism; unlike the other mechanisms, they can be used for communication between processes on different machines.
- Signal
Signals are a complex form of communication used to notify a receiving process that an event has occurred.
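As a concrete example of message passing between processes, here is a minimal Node.js sketch using the built-in child_process module: fork() starts a child process and sets up an IPC channel between parent and child (the ping/pong payload is made up for illustration).

```js
// Parent and child are separate processes with separate memory; they communicate
// only by exchanging messages over the IPC channel created by fork().
const { fork } = require('child_process');

if (process.send === undefined) {
  // Parent: process.send only exists in processes started with an IPC channel.
  const child = fork(__filename);
  child.on('message', (msg) => {
    console.log('parent received:', msg);  // { reply: 'pong' }
    child.disconnect();                    // close the IPC channel so both sides can exit
  });
  child.send({ ping: true });
} else {
  // Child: answer the parent's message and close its end of the channel.
  process.on('message', () => {
    process.send({ reply: 'pong' });
    process.disconnect();
  });
}
```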
3. Synchronization between threads
- Mutex: with the mutex mechanism, only the thread that holds the mutex can access the shared resource. Because there is only one mutex, the shared resource is guaranteed not to be accessed by multiple threads at the same time (a JavaScript-flavored sketch follows this list).
- Semaphore: allows multiple threads to access the same resource at the same time, but limits the maximum number of threads that can access it simultaneously.
- Event (signal): keeps multiple threads synchronized through notification operations, and also makes it easy to implement priority comparison among threads.
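A hedged JavaScript-flavored sketch of the mutex idea (JavaScript itself is single-threaded, so the Mutex class below guards overlapping async tasks rather than OS threads, but the mutual-exclusion concept is the same):

```js
// Only one critical section runs at a time; later callers wait for earlier ones.
class Mutex {
  constructor() {
    this.queue = Promise.resolve();          // tail of the wait queue
  }
  lock(criticalSection) {
    const run = this.queue.then(() => criticalSection());
    this.queue = run.catch(() => {});        // keep the chain alive even on errors
    return run;
  }
}

const mutex = new Mutex();
let counter = 0;
// Both read-modify-write pairs go through the mutex, so they never interleave.
mutex.lock(async () => { const c = counter; await Promise.resolve(); counter = c + 1; });
mutex.lock(async () => { const c = counter; await Promise.resolve(); counter = c + 1; });
mutex.lock(async () => console.log(counter)); // 2
```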
4. What are deadlocks and the four conditions that cause deadlocks
Among two or more concurrent processes, a deadlock forms when each process holds a resource that another process has requested while requesting a resource held by the other. Colloquially, it is a state in which two or more processes block indefinitely, each waiting for the other. The four conditions for a deadlock (if any one of them is not met, no deadlock can occur):
- Mutual exclusion condition: a resource can only be used by one process at a time.
- Request-and-hold condition: when a process is blocked waiting for a requested resource, it keeps holding the resources it has already acquired.
- Non-preemption condition: resources already acquired by a process cannot be taken away from it before it has finished using them.
- Circular wait condition: several processes form a head-to-tail circular chain, each waiting for a resource held by the next (a sketch follows this list).
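A hedged JavaScript illustration of circular wait (the createLock helper is an ad-hoc promise queue written just for this example): each task holds one lock while waiting for the other's, so neither log line is ever printed.

```js
// A tiny FIFO lock: acquire() resolves with a release() callback once it is our turn.
function createLock() {
  let tail = Promise.resolve();
  return {
    acquire() {
      let release;
      const turn = tail;                              // wait for everyone ahead of us
      tail = new Promise((resolve) => { release = resolve; });
      return turn.then(() => release);
    }
  };
}

const lockA = createLock();
const lockB = createLock();

(async () => {
  const releaseA = await lockA.acquire();
  await new Promise((r) => setTimeout(r, 10));        // give task 2 time to grab lockB
  const releaseB = await lockB.acquire();             // waits forever: task 2 holds lockB
  console.log('task 1 done'); releaseB(); releaseA();
})();

(async () => {
  const releaseB = await lockB.acquire();
  await new Promise((r) => setTimeout(r, 10));
  const releaseA = await lockA.acquire();             // waits forever: task 1 holds lockA
  console.log('task 2 done'); releaseA(); releaseB();
})();
```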
5. TCP's three-way handshake and four-way wave
6. Why a three-way handshake and a four-way wave are needed
7. Sorting complexity and stability
8. Box model
- W3C standard box model
- IE box model
9. position
10. The flex layout
Flex layout gives the box model maximum flexibility. A flex container has two axes by default: the horizontal main axis and the vertical cross axis. The start of the main axis (where it crosses the border) is called main start and its end main end; the start of the cross axis is called cross start and its end cross end. By default, items are laid out along the main axis. The main-axis space a single item occupies is called its main size, and the cross-axis space it occupies is its cross size.
Detailed links
1. Six attributes of the container:
- flex-direction
- row (default): the main axis is horizontal, starting at the left end.
- row-reverse: the main axis is horizontal, starting at the right end.
- column: the main axis is vertical, starting at the top edge.
- column-reverse: the main axis is vertical, starting at the bottom edge.
- flex-wrap
- flex-flow
The flex-flow property is a shorthand for flex-direction and flex-wrap. The default value is row nowrap.
- justify-content
Defines the alignment of items on the main axis
- align-items
The align-items property defines how items are aligned on the cross axis.
- flex-start: align items with the start of the cross axis.
- flex-end: align items with the end of the cross axis.
- center: align items with the midpoint of the cross axis.
- baseline: align items by the baseline of their first line of text.
- stretch (default): if an item has no height set (or its height is auto), it stretches to fill the full height of the container.
- align-content
The align-content property defines how multiple lines (axes) are aligned. If the items occupy only one line, this property has no effect.
2. Six properties of the items:
- order
- flex-grow
- flex-shrink
- flex-basis
- flex
- align-self
11. Center vertically
An introduction to implementing vertical centering
12. Prototype, prototype chain
An in-depth look at the prototype and the prototype chain
13. Js inheritance
Detailed introduction
14. MVVM
- View layer: the view layer
In front-end development this is usually the DOM layer; its main job is to present information to the user.
- Model layer: the data layer
The data may be fixed, hard-coded data, but more often it comes from the server, fetched over the network.
- ViewModel layer: the view-model layer
The ViewModel is the bridge between the View and the Model. On one hand it implements Data Binding, reflecting Model changes to the View in real time; on the other hand it implements DOM Listeners, listening for DOM events (click, scroll, touch, etc.) and changing the corresponding data when necessary (a minimal sketch follows).
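As a hedged, minimal Vue 2-style illustration of the three layers (the template and data are made up for the example): the HTML below is the View, `data` is the Model, and the Vue instance plays the ViewModel that wires DOM events to data and data changes back to the DOM.

```js
// View (in the page's HTML; assumes Vue 2 is loaded globally, e.g. via a <script> tag):
// <div id="app">
//   <input v-model="message">   <!-- DOM listener: typing updates the Model -->
//   <p>{{ message }}</p>        <!-- data binding: Model changes re-render the View -->
// </div>

// ViewModel: the Vue instance bridges the View above and the Model below.
new Vue({
  el: '#app',
  data: {
    message: 'hello'   // Model: plain data, often fetched from a server in practice
  }
});
```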
15. Two-way data binding
- Vue uses data hijacking combined with the publish/subscribe pattern: through Object.defineProperty() it hijacks the setter and getter of each property, publishes messages to subscribers when the data changes, and triggers the corresponding listener callbacks.
- Getter: a function that reads a property value of an object.
- Setter: a function that assigns a value to a property of an object.
- Publish: a function that, when publishing, executes the corresponding callbacks.
- Subscribe: a function that adds subscribers, passing in the callbacks to be executed on publish, possibly with extra arguments.
- Data hijacking: Object.defineProperty hijacks the setter and getter operations of object properties and "plants" a listener that sends a notification when the data changes (see the sketch below).
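A minimal sketch of the data-hijacking idea described above (this is illustrative, not Vue's actual source): Object.defineProperty wraps each property in a getter/setter, and the setter publishes the change to whoever subscribed to that key.

```js
function observe(data, notify) {
  Object.keys(data).forEach((key) => {
    let value = data[key];
    Object.defineProperty(data, key, {
      get() { return value; },               // getter: read the hijacked value
      set(newValue) {
        if (newValue === value) return;
        value = newValue;
        notify(key, newValue);               // publish: trigger the listener callbacks
      }
    });
  });
}

// Subscribers keyed by property name; one callback stands in for a view update here.
const subscribers = { message: [(v) => console.log('view updated with:', v)] };
const data = { message: 'hello' };
observe(data, (key, value) => (subscribers[key] || []).forEach((cb) => cb(value)));

data.message = 'world'; // the setter fires -> "view updated with: world"
```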
16. The virtual DOM
- The virtual DOM is a layer of abstraction over the real DOM: a tree built from JavaScript objects (VNode nodes) whose properties describe each node; through a series of operations the tree is finally mapped onto the real DOM.
- In JavaScript, a virtual DOM node is represented as a plain object that contains at least three properties: tag, attrs, and children. The exact property names vary from framework to framework.
- The virtual DOM exists so that virtual nodes can be rendered into the page view more efficiently, so the properties of a virtual DOM node correspond to properties of the real DOM.
- Through a transaction-like batching mechanism, the results of multiple DOM modifications are applied to the page in a single update, which effectively reduces the number of page renders and the repaint and reflow caused by DOM changes, improving rendering performance.
- The diff algorithm encapsulated inside Vue or React compares the old and new trees during rendering, applies only the parts that changed, and leaves the unchanged parts alone (a simple VNode sketch follows).
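A hedged sketch of the VNode shape described above plus a naive render step (the h/render helpers and property names are made up here; they vary by framework, and real frameworks add diffing and batching on top):

```js
// A virtual node is just a plain object with tag, attrs and children.
function h(tag, attrs, children) {
  return { tag, attrs: attrs || {}, children: children || [] };
}

// Map a virtual tree onto real DOM nodes (run in a browser environment).
function render(vnode) {
  if (typeof vnode === 'string') return document.createTextNode(vnode);
  const el = document.createElement(vnode.tag);
  Object.entries(vnode.attrs).forEach(([name, value]) => el.setAttribute(name, value));
  vnode.children.forEach((child) => el.appendChild(render(child)));
  return el;
}

// Build the tree as plain objects first, then touch the real DOM once.
const tree = h('ul', { class: 'list' }, [
  h('li', null, ['item 1']),
  h('li', null, ['item 2'])
]);
document.body.appendChild(render(tree));
```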
17. The process of entering the url from the browser to the interface display
- 1. Enter the url
- 2. DNS resolution
- 3. Establish the TCP connection
- 4. The client sends the HTTP request
- 5. The server processes the request
- 6. The server responds to the request
- 7. The browser displays HTML
- 8. Browsers send requests for other resources in HTML.
18. Browser cache
- HTTP cache:
- Strong cache: Reads resources directly from the cache without sending requests to the server.
- Negotiated cache: after the strong cache expires, the browser sends a request to the server carrying the cache tag, and the server decides whether the cached copy can still be used (see the Node.js sketch after this list).
- Local cache:
- LocalStorage: set on the front end; used for long-term storage to reduce data requests.
- SessionStorage: set on the front end; it only exists for the current session, so the data disappears once the browser is reopened.
- Cookie: set by the back end via Set-Cookie and stored in a local file on the client; the cookie contents are automatically sent to the server with each request.
- IndexedDB: provides the browser with a local database, a query interface, and index creation.
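A hedged Node.js sketch of the two HTTP cache modes described above (the port and response body are made up): Cache-Control/max-age drives the strong cache, and ETag/If-None-Match drives the negotiated cache.

```js
const http = require('http');
const crypto = require('crypto');

const body = '<h1>hello</h1>';
const etag = '"' + crypto.createHash('md5').update(body).digest('hex') + '"';

http.createServer((req, res) => {
  if (req.headers['if-none-match'] === etag) {
    res.writeHead(304);                      // negotiated cache hit: no body is resent
    return res.end();
  }
  res.writeHead(200, {
    'Content-Type': 'text/html',
    'Cache-Control': 'max-age=60',           // strong cache: reuse for 60s without asking
    'ETag': etag                             // the tag the browser sends back later
  });
  res.end(body);
}).listen(3000);
```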
Cookies and sessions: a cookie is data that a website stores on the user's local machine (in the browser) to identify the user and track the session. A session is server-side session control: variables stored in the session object are kept while the user navigates between pages and are only lost when the user ends the session.
- In common:
- Cookies and sessions are both used to track the identity of a browser user.
- Differences:
- Cookie data is stored in the client's browser, while session data is stored on the server.
- A session's expiration depends on the server-side configuration, whereas a cookie's expiration can be set when the cookie is created.
- Cookies are not very secure: others can inspect cookies stored locally and forge them, so if security is the main concern, use sessions.
- Usage:
- Sessions are kept on the server for a certain amount of time, so as traffic grows they put pressure on the server; if reducing server load is the main concern, use cookies.
- A single cookie on the client is limited to about 4 KB.
- Therefore: keep important information such as login state in the session; other information can be stored in cookies if needed.
19. Webpack packaging optimization
- Reduce the size of packaged files
- Load on demand
- Tree shaking: removes code that is never referenced from the bundle.
- Scope hoisting: analyzes the dependencies between modules and merges the bundled modules into as few functions as possible.
- Speed up the build
- Optimize the loaders (a hedged config sketch follows the link below).
Detailed introduction
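A hedged webpack.config.js sketch of the optimizations listed above (exact options depend on the webpack version; this follows webpack 4/5-style configuration, and the paths are made up):

```js
const path = require('path');

module.exports = {
  mode: 'production',                 // enables tree shaking and scope hoisting by default
  entry: './src/index.js',
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: '[name].[contenthash].js'
  },
  optimization: {
    usedExports: true,                // tree shaking: mark unused exports for removal
    concatenateModules: true,         // scope hoisting: merge modules into one function
    splitChunks: { chunks: 'all' }    // load on demand: split shared/async chunks
  },
  module: {
    rules: [
      {
        test: /\.js$/,
        include: path.resolve(__dirname, 'src'),  // narrow the loader's search scope
        use: ['babel-loader']
      }
    ]
  }
};
```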
20. XSS
21. Node.js
We talked for about an hour