Hello everyone, this is part of a full Java review series. This installment, the second, covers operating systems.

1. The difference between processes and threads

**Process:** the smallest unit of resource allocation; a process is a program in execution. A process can contain multiple threads, and those threads share the process's heap and method area, but each thread has its own virtual machine stack, native method stack, and program counter.

**Thread:** the smallest unit of task scheduling and execution. Creating and switching threads is cheap, but because the threads of a process can affect one another, threads are less suitable for resource isolation and protection. Threads share the resources of their process.

2. Coroutines?

A thread can have multiple coroutines, just as a process can have multiple threads. Unlike threads, coroutines are scheduled cooperatively in user space rather than by the kernel, so switching between them is even cheaper.

3. Inter-process communication (IPC)

Anonymous pipe:

Anonymous pipes are half-duplex: data flows in only one direction, so when two parties need two-way communication, two pipes must be created. Anonymous pipes can only be used between related processes, i.e. between a parent and its child or between siblings.
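The JDK does not expose OS anonymous pipes directly, but `PipedInputStream`/`PipedOutputStream` model the same idea between threads: a one-way byte stream with a connected write end and read end. A minimal sketch (the class and method names are made up for illustration):

```java
import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;

public class PipeDemo {
    // Send one message through the pipe from a writer thread and read it back
    public static String transfer() throws Exception {
        PipedOutputStream out = new PipedOutputStream();
        PipedInputStream in = new PipedInputStream(out); // connect the two ends

        // Writer thread: plays the role of the writing end of the pipe
        Thread writer = new Thread(() -> {
            try {
                out.write("hello pipe".getBytes());
                out.close(); // closing signals end-of-stream to the reader
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
        });
        writer.start();

        // Data flows one way only, writer -> reader, like an anonymous pipe
        byte[] buf = new byte[64];
        int n = in.read(buf);
        writer.join();
        in.close();
        return new String(buf, 0, n);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(transfer());
    }
}
```

For two-way communication, two such pipes would be needed, exactly as described above.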

Named pipe FIFO:

A named pipe differs from an anonymous pipe in that it is associated with a pathname and exists in the file system as a FIFO file. Any process with permission to access that path can communicate through the FIFO, even if it is unrelated to the FIFO's creator, so named pipes allow data exchange between unrelated processes. Note that a FIFO is strictly first-in, first-out: reads always return data from the head of the pipe, and writes append data to the tail.

Signal:

A signal is a comparatively complex communication mechanism. Signals can be generated by: a key press, a hardware exception, a process calling the kill function to signal another process, or a user running the kill command. A signal carries very little information; it is mainly used to notify the receiving process that some event has occurred.

Message queue: a message queue is a linked list of messages stored in the kernel and identified by a message queue identifier. Message queues overcome the limitations that signals carry little information, that pipes carry only an unformatted byte stream, and that pipe buffers are bounded. A message queue works like a mailbox: a message hangs there when it arrives and is retrieved when needed, giving a simple and efficient way to transfer data between two unrelated processes. Compared with named pipes, a message queue exists independently of the sending and receiving processes, which removes some of the difficulty of synchronizing the opening and closing of a named pipe. A message queue sends blocks of data from one process to another; each block carries a type, and the receiving process can independently receive blocks of different types.

Advantages:

A. We can almost completely avoid the synchronization and blocking problems of named pipes by sending messages.

B. Urgent messages can be checked for and received ahead of others.

Disadvantages:

A. As with pipes, each data block has a maximum length limit.

B. There is also an upper limit on the total length of all data blocks contained in all queues in the system.
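The JDK has no API for kernel message queues, but an in-process `BlockingQueue` is a close analogue that illustrates the "typed block" idea described above; a sketch under that assumption (the `Message` class and the type value are invented for illustration):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class MessageQueueDemo {
    // A message block that carries a type, mirroring how each block in a kernel message queue has a type
    static class Message {
        final int type;
        final String body;
        Message(int type, String body) { this.type = type; this.body = body; }
    }

    // Bounded queue: analogous to the kernel's limit on the total length of queued messages
    static final BlockingQueue<Message> queue = new ArrayBlockingQueue<>(16);

    public static String roundTrip() throws InterruptedException {
        // Producer thread stands in for the sending process
        Thread producer = new Thread(() -> {
            try {
                queue.put(new Message(1, "order created")); // blocks if the queue is full
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();
        Message m = queue.take(); // blocks until a message arrives; no open/close handshake needed
        producer.join();
        return m.type + ":" + m.body;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(roundTrip());
    }
}
```

Note how sender and receiver never coordinate directly, which is the advantage over named pipes mentioned above.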

Shared memory:

Shared memory lets multiple processes read and write the same region of memory directly, and it is the fastest form of IPC; it was designed to address the inefficiency of the other mechanisms. To exchange information, the kernel sets aside a region of memory, and each process that needs it maps that region into its own address space. Processes then read and write the memory directly without copying data, which greatly improves efficiency. Because multiple processes share the segment, a synchronization mechanism (such as a semaphore) is needed for synchronization and mutual exclusion.

Semaphore:

A semaphore is a counter used to control access by multiple processes to shared data. Its purpose is inter-process synchronization: it solves synchronization problems and avoids race conditions.

Socket:

Sockets are mainly used for communication between a client and a server over a network. A socket is the basic unit of TCP/IP network communication and can be seen as an endpoint of two-way communication between processes on different hosts. Put simply, it is an agreement between the two communicating sides, and the communication is carried out through the socket-related functions.
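The socket model above can be sketched with the JDK's `ServerSocket`/`Socket`: a tiny echo server on the loopback interface (the class and method names are illustrative):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class SocketDemo {
    // Start an echo server on an ephemeral loopback port and send it one line
    public static String echo(String msg) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) { // port 0 = pick any free port
            // Server side: accept one connection and echo one line back
            Thread serverThread = new Thread(() -> {
                try (Socket client = server.accept();
                     BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()));
                     PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                    out.println(in.readLine()); // echo the received line
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            });
            serverThread.start();

            // Client side: connect, send a line, read the reply
            try (Socket socket = new Socket("127.0.0.1", server.getLocalPort());
                 PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()))) {
                out.println(msg);
                String reply = in.readLine();
                serverThread.join();
                return reply;
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(echo("ping"));
    }
}
```

Both endpoints here run in one process for brevity; across hosts, only the hostname and port change.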

4. User mode and kernel mode

A computer system runs two kinds of programs: system programs and application programs. To ensure that system programs are not damaged, intentionally or not, by applications, the CPU provides two states: user mode and kernel mode.

**User mode:** runs all application programs, with only limited access to memory

**Kernel mode:** runs operating system code; the CPU can access all of memory, including peripheral devices

Why have user mode and kernel mode? To limit the access capabilities of different programs: to prevent one program from reading another program's memory, or from grabbing peripheral device data and sending it over the network.

There are three ways to switch from user mode to kernel mode:

A. System call

This is the way a user-mode process actively requests a switch to kernel mode: it issues a system call to request a service provided by the operating system to complete its work. For example, fork() executes a system call to create a new process. The system call mechanism is built on an interrupt reserved by the operating system for this purpose, such as the int 80h interrupt on Linux.

B. Exception

While the CPU is executing a user-mode program, an unexpected exception may occur, such as a page fault. Handling it transfers control from the running process to the kernel code responsible for that exception, switching the CPU into kernel mode.

C. Peripheral device interrupt

When a peripheral device finishes an operation the user requested, it sends an interrupt signal to the CPU. The CPU then suspends the instruction it was about to execute and runs the handler corresponding to the interrupt. If the interrupted instruction belonged to a user-mode program, this naturally switches the CPU from user mode to kernel mode. For example, when a disk read/write completes, the system switches to the disk interrupt handler for the follow-up work.

These are the three main ways the system transitions from user mode to kernel mode at runtime. A system call is actively initiated by the user process, while exceptions and peripheral interrupts are passive.

5. How is a process's address space laid out? What do threads share?

Stack – allocated and freed automatically by the compiler; holds function parameters, local variables, and so on.

Heap – usually allocated and released by the programmer; if the programmer does not release it, it is reclaimed by the OS when the program ends.

Static area – Storage for global variables and static variables

Code area (text) – Holds the binary code of the program's functions.

Threads share the heap and the static area; each thread has its own stack.
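The sharing rules above can be demonstrated directly in Java: two threads each keep their own stack-allocated local counter but update one shared static counter (a sketch; the class and counter names are illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class SharedMemoryDemo {
    // Static area: one copy, visible to every thread in the process
    static final AtomicInteger sharedCounter = new AtomicInteger(0);

    public static int run() throws InterruptedException {
        sharedCounter.set(0); // reset so the result is deterministic across runs
        Runnable task = () -> {
            int local = 0; // stack: each thread gets its own private copy of this variable
            for (int i = 0; i < 1000; i++) {
                local++;                         // invisible to the other thread
                sharedCounter.incrementAndGet(); // shared: both threads update the same object
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        return sharedCounter.get(); // 2000: increments from both threads are visible
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // prints 2000
    }
}
```

`AtomicInteger` is used so the two threads' updates to the shared counter do not race.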

6. Memory management schemes: paging, segmentation, and segment-paging, with their advantages and disadvantages

Storage management schemes: block management, page management, segment management, segment-page management

Segment management:

In segment storage management, the program's address space is divided into segments such as a code segment, data segment, and stack segment, so each process has a two-dimensional (segment, offset) address space, and processes are independent of one another. The advantage of segmentation is that there is no internal fragmentation, because segment sizes are variable and can be adjusted to fit their contents. However, swapping segments in and out produces external fragmentation: for example, if a 5K segment is swapped out and a 4K segment is loaded into the hole, a 1K external fragment remains.

Page management: in paged storage management, the program's logical address space is divided into fixed-size pages, and physical memory is divided into page frames of the same size. When the program is loaded, any page can be placed into any frame, and the frames need not be contiguous, achieving discrete allocation. The advantage of paging is that there is no external fragmentation (pages are of fixed size); the disadvantage is internal fragmentation (the last page may not be completely filled).
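The page/frame translation behind this scheme can be made concrete: a logical address splits into a page number (high bits) and an offset (low bits), the page table maps page number to frame number, and the physical address is `frame * pageSize + offset`. A toy sketch with an invented page table:

```java
public class PagingDemo {
    static final int PAGE_SIZE = 4096; // 4 KB pages

    // A toy page table: index = page number, value = frame number (hypothetical mapping)
    static final int[] pageTable = {5, 2, 7, 0};

    // Translate a logical address to a physical address
    public static int translate(int logicalAddress) {
        int pageNumber = logicalAddress / PAGE_SIZE; // high bits select the page
        int offset = logicalAddress % PAGE_SIZE;     // low bits pass through unchanged
        int frameNumber = pageTable[pageNumber];     // page-table lookup
        return frameNumber * PAGE_SIZE + offset;
    }

    public static void main(String[] args) {
        // Logical address 4100 = page 1, offset 4; page 1 maps to frame 2
        System.out.println(translate(4100)); // prints 8196 (2 * 4096 + 4)
    }
}
```

Because consecutive pages can map to arbitrary frames (5, 2, 7, 0 here), the frames need not be contiguous, which is exactly the "discrete allocation" property described above.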

Segment-page management:

Segment-page management combines the advantages of segment management and page management. Simply put, main memory is first divided into segments, and each segment is divided into pages; memory is therefore discrete both between segments and within a segment.

7. Page replacement algorithms: why is FIFO not good, and how can it be improved? The idea of LRU, and a handwritten LRU

Replacement algorithms: FIFO (first in, first out), LRU (least recently used), OPT (the optimal replacement algorithm)

FIFO:

How it works: evicts the page that has resided in memory the longest

Advantages: simple and intuitive

Disadvantages: it ignores how frequently pages are actually used, so its performance is poor and it does not match real access patterns; it can even suffer from Belady's anomaly, where giving a process more frames increases its page faults. It sees little practical use.

Improvement: add an R (referenced) bit to each page. Scan from the head of the list each time: if R is set, clear it and move the page to the tail of the list; if R is 0, the page is both old and unused, so evict it.
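The improvement just described is the classic "second chance" variant of FIFO; a minimal sketch of the eviction scan (the class name and the sample R bits are illustrative):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class SecondChanceDemo {
    static class Page {
        final int id;
        boolean referenced; // the R bit
        Page(int id) { this.id = id; }
    }

    // Pick a victim: pages with R set get a second chance and move to the tail
    static int evict(Deque<Page> queue) {
        while (true) {
            Page head = queue.pollFirst();
            if (head.referenced) {
                head.referenced = false; // clear R and give the page a second chance
                queue.addLast(head);
            } else {
                return head.id; // old AND unreferenced: this is the victim
            }
        }
    }

    public static int demo() {
        Deque<Page> queue = new ArrayDeque<>();
        Page p0 = new Page(0); p0.referenced = true; // oldest, but recently used
        Page p1 = new Page(1);                       // not used recently
        Page p2 = new Page(2); p2.referenced = true;
        queue.addLast(p0); queue.addLast(p1); queue.addLast(p2);
        return evict(queue);
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints 1: p0 is spared, p1 is evicted
    }
}
```

Plain FIFO would have evicted p0; the R bit saves it because it was recently referenced.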

Least recently used (LRU):

Principle: evict the page that has not been used for the longest time

Advantages: exploits the temporal locality of program accesses, so it performs well and is widely used in practice

Disadvantages: a full implementation needs extra hardware support, which adds hardware cost

Handwritten LRU:

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

/**
 * @program: Java
 * @description: LRU (least recently used) replacement algorithm, implemented with a LinkedHashMap
 * @author: Mr.Li
 * @create: 2020-07-17
 */
public class LRUCache {
    private LinkedHashMap<Integer, Integer> cache;
    private int capacity;   // capacity of the cache

    /**
     * Initialize the cache with the given capacity
     * @param capacity
     */
    public LRUCache(int capacity) {
        cache = new LinkedHashMap<>(capacity);
        this.capacity = capacity;
    }

    public int get(int key) {
        // The key does not exist in the cache
        if (!cache.containsKey(key)) {
            return -1;
        }
        int res = cache.get(key);
        cache.remove(key);    // remove the entry from the list first
        cache.put(key, res);  // re-insert it so it moves to the tail (most recently used)
        return res;
    }

    public void put(int key, int value) {
        if (cache.containsKey(key)) {
            cache.remove(key); // already present: remove it so re-insertion moves it to the tail
        }
        if (capacity == cache.size()) {
            // Cache is full: evict the head of the list (the least recently used entry)
            Set<Integer> keySet = cache.keySet();
            Iterator<Integer> iterator = keySet.iterator();
            cache.remove(iterator.next());
        }
        cache.put(key, value); // insert at the tail of the list
    }
}

/**
 * @program: Java
 * @description: LRU replacement algorithm, implemented via LinkedHashMap's removeEldestEntry hook
 *               (renamed LRUCache2 so both versions can coexist)
 * @author: Mr.Li
 * @create: 2020-07-17 10:59
 */
class LRUCache2 {
    private Map<Integer, Integer> map;
    private int capacity;

    /**
     * Initialize the cache with the given capacity
     * @param capacity
     */
    public LRUCache2(int capacity) {
        this.capacity = capacity;
        // accessOrder = true orders entries by access rather than insertion
        map = new LinkedHashMap<Integer, Integer>(capacity, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Integer, Integer> eldest) {
                return size() > capacity; // once size exceeds capacity, drop the eldest entry
            }
        };
    }

    public int get(int key) {
        // Return the value for the key, or -1 if it is absent
        return map.getOrDefault(key, -1);
    }

    public void put(int key, int value) {
        map.put(key, value);
    }
}
```

Optimal replacement algorithm (OPT):

Principle: evict the page in memory that will not be accessed for the longest time in the future, or never again

Advantages: the best possible performance; guarantees the lowest page-fault rate

Disadvantages: too idealistic to implement in practice, because future accesses cannot be predicted; it mainly serves as a yardstick for evaluating other algorithms

8. Deadlock: conditions and solutions

A deadlock is the phenomenon in which two or more processes wait for each other indefinitely because they compete for resources during execution.

Deadlock conditions:

Mutual exclusion: a resource can be held by only one process at a time; other processes that request it must wait until the holder releases it.

Request and hold: a process holding at least one resource requests another resource that is held by some other process; the request blocks, but the process does not release the resources it already holds

No preemption: a resource already acquired by a process cannot be taken away from it; it can only be released voluntarily after the process finishes using it

Circular wait: a set of processes forms a cycle in which each process waits for a resource held by the next

Solution: Break any condition of the deadlock

Allocate all resources at once, breaking the request-and-hold condition

Allow preemption: when a process's request for a new resource cannot be satisfied, it releases the resources it already holds, breaking the no-preemption condition

Allocate resources in order: the system numbers each resource type, and every process must request resources in increasing order of number (and release them in the reverse order), breaking the circular-wait condition
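Ordered allocation can be sketched in Java: if every thread acquires locks in the same global order, a wait cycle can never form (a sketch; the two-lock scenario and names are invented for illustration):

```java
import java.util.concurrent.locks.ReentrantLock;

public class OrderedLockingDemo {
    // Resources numbered 1 and 2; every thread must lock the lower number first
    static final ReentrantLock lockA = new ReentrantLock(); // resource #1
    static final ReentrantLock lockB = new ReentrantLock(); // resource #2

    static void useBoth(StringBuilder log, String name) {
        lockA.lock();      // always acquire #1 before #2: no cycle is possible
        try {
            lockB.lock();
            try {
                log.append(name); // critical section using both resources
            } finally {
                lockB.unlock();
            }
        } finally {
            lockA.unlock();
        }
    }

    public static int run() throws InterruptedException {
        StringBuilder log = new StringBuilder();
        Thread t1 = new Thread(() -> useBoth(log, "x"));
        Thread t2 = new Thread(() -> useBoth(log, "y"));
        t1.start(); t2.start();
        t1.join(); t2.join();
        return log.length(); // 2: both threads completed without deadlock
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // prints 2
    }
}
```

If the second thread instead locked B before A, the two threads could each hold one lock and wait forever for the other: exactly the circular wait this rule prevents.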

Finally, I wish you all a satisfying offer, quick promotions and raises, and the best in your careers. If you found this helpful, please leave a like, and see you next time.
