The Autumn Recruitment Escort series is continuously updated. The previous articles are listed below; feel free to read them as needed:
- HTML, CSS: Autumn Recruitment Escort — HTML & CSS
- JS: Autumn Recruitment Escort — JS Interview Part 1, Autumn Recruitment Escort — JS Interview Part 2
- Browser: Autumn Recruitment Escort — Browser
- Vue: Autumn Recruitment Escort — Vue
- Computer network: Autumn Recruitment Escort — Computer Network
- Operating system: Autumn Recruitment Escort — Operating System
- Database: Autumn Recruitment Escort — Database
Operating System
As a foundational course, the operating system requires familiarity with the following common topics.
Basic Concepts:
- The concept, features, and services of an operating system
- Privileged vs. non-privileged instructions, the operating system kernel
- Interrupts and exceptions
- System calls
Process management:
- Definition, composition, organization, and characteristics of a process
- Process states and transitions (the five-state and seven-state models)
- Process communication: shared storage, pipe communication, message passing
- The difference between processes and threads
- Processor scheduling: high-level scheduling (job scheduling), intermediate scheduling (memory scheduling / swapping), low-level scheduling (process scheduling)
- Process scheduling algorithms: FCFS, shortest process first, round-robin, non-preemptive priority scheduling, preemptive priority scheduling
- Synchronization and mutual exclusion of processes
- How process mutual exclusion is implemented: software approaches, hardware approaches, semaphores, monitors
- Deadlock: concept, necessary conditions, when it occurs
- Deadlock handling: deadlock prevention, deadlock avoidance (banker's algorithm), deadlock detection and recovery
Storage management:
- Allocating and reclaiming memory space
- Providing techniques that logically expand memory space (virtual memory)
- Address translation: converting logical addresses to physical addresses; the programmer only deals with logical addresses
- Absolute loading: the compiler performs address translation (single-program batch systems, no operating system)
- Static relocatable loading: the loader performs address translation (early multiprogrammed batch systems)
- Dynamic run-time loading: address translation is done at run time (modern operating systems)
- Memory protection: the operating system must ensure that each process runs in its own memory space and does not interfere with others
- Setting upper-bound and lower-bound registers
- Using a relocation (base) register and a limit (boundary-address) register for checking
File management:
- How is the data inside a file organized?
- How are files organized?
- From the bottom up, what features should the OS provide to make files easier to use?
- From the top down, how should file data be stored on external storage (disk)?
Basic Concepts
1. Basic concepts of the operating system
The operating system provides the following functions and services:
(1) As a system resource manager
- Processor management
- Storage management
- File management
- Device management
(2) As an interface between the user and the computer hardware
- The command interface
- Program interface
- GUI
(3) As the layer closest to the hardware
- Implements an extension of the bare hardware machine
2. Operating system characteristics
(1) Concurrency
- Concurrency: two or more events occur within the same time interval. Macroscopically they appear to happen simultaneously; microscopically they execute alternately.
- Parallelism: two or more events occur at the same instant.
(2) Sharing
Resource sharing: resources in the system can be used by multiple concurrently executing processes. There are two kinds of sharing:
- Mutually exclusive sharing: within any given period of time, only one process may access the resource.
- Simultaneous sharing: within the same period of time, multiple processes may access the resource. Macroscopically the accesses appear simultaneous; microscopically they alternate.
(3) Virtuality
Turning a physical entity that actually exists into one or more logical counterparts that the user perceives.
(4) Asynchrony
A process does not execute continuously from start to finish; it advances in a stop-and-go manner at an unpredictable speed.
Process management
1. The concept of process
A process consists of a PCB, a program segment, and a data segment.
- A process is one execution of a program
- A process is an independent unit of resource allocation and scheduling
2. Process status and transitions
Process states and transitions before suspension (swapping) was introduced (the five-state model):
Process states and transitions after suspension was introduced (the seven-state model):
3. Process communication
Processes are independent units of resource allocation, so communication is needed to exchange information between processes.
(1) Shared storage
- With this method, the processes' accesses to the shared space must be mutually exclusive
(2) Pipe communication
- A pipe is half-duplex. If full-duplex communication is needed, two pipes are required
- Processes access the pipe mutually exclusively
(3) Message passing
- Communication is done through the send-message / receive-message primitives provided by the operating system
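To make pipe communication concrete, here is a minimal sketch in C using the POSIX pipe() and fork() calls on a Unix-like system; the message text and buffer size are made up for illustration. It shows the half-duplex nature: data flows only from the write end to the read end, and a second pipe would be needed for full-duplex communication.

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

/* Minimal sketch: one-way (half-duplex) communication through a pipe.
 * The parent writes a message, the child reads it. */
int main(void) {
    int fd[2];                      /* fd[0]: read end, fd[1]: write end */
    char buf[64];

    if (pipe(fd) == -1) {
        perror("pipe");
        return 1;
    }

    if (fork() == 0) {              /* child: reader */
        close(fd[1]);               /* close unused write end */
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        buf[n > 0 ? n : 0] = '\0';
        printf("child received: %s\n", buf);
        close(fd[0]);
        return 0;
    }

    /* parent: writer */
    close(fd[0]);                   /* close unused read end */
    const char *msg = "hello from parent";
    write(fd[1], msg, strlen(msg));
    close(fd[1]);
    wait(NULL);                     /* reap the child */
    return 0;
}
```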
4. Processes and threads
Early systems were single-program batch systems, which left system resources idle much of the time. Operating systems then introduced process management and control so that multiple programs could run concurrently. A single process may also need internal concurrency: QQ, for example, needs video and chat to work at the same time. This is why the lightweight process, i.e. the thread, appeared.
- A process is the smallest unit of system resource allocation; a thread is the smallest unit of CPU scheduling
- Threads are lightweight processes and can share resources with the other threads of the same process
- Threads increase concurrency and cost less to create and switch than processes (a small sketch follows this list)
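As an illustration of threads sharing the resources of their process, here is a small POSIX threads (pthreads) sketch; the global counter, iteration count, and use of a mutex are illustrative choices, not part of the original article. Compile with -lpthread.

```c
#include <stdio.h>
#include <pthread.h>

/* Minimal sketch: two threads of the same process share a global counter.
 * A mutex keeps the updates mutually exclusive. */
static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);
        counter++;                  /* shared data: visible to both threads */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);   /* expected 200000 */
    return 0;
}
```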
5. Processor scheduling
Because system resources are limited, the processor must follow some rule to decide the order in which tasks are handled.
There are three levels of processor scheduling:
- High-level scheduling (job scheduling)
- Intermediate scheduling (memory scheduling / swapping)
- Low-level scheduling (process scheduling)
5.1 High-level Scheduling
High-level scheduling is job oriented. Because memory space is limited, one or more jobs must be selected from the backup (job) queue according to certain rules, allocated resources, and given corresponding processes.
5.2 Intermediate Scheduling
Intermediate scheduling is process oriented and involves virtual storage: processes that temporarily cannot run are moved out to external storage and are brought back into memory once the conditions for running are met again. A process moved out to external storage in this way is said to be in the suspended state.
5.3 Low-level Scheduling
A process is selected from the ready queue according to some method and policy, and the CPU is allocated to it.
Some common process scheduling algorithms (a toy round-robin simulation follows the list):
- First come, first served (FCFS)
- Non-preemptive priority scheduling
- Preemptive priority scheduling
- Round-robin / time-slice scheduling (RR)
- Shortest process first (SPF)
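As a rough illustration of how round-robin scheduling hands out time slices, here is a toy simulation; the burst times, the time slice of 2, and the assumption that all processes arrive at time 0 are made up for the example.

```c
#include <stdio.h>

/* Toy round-robin (RR) simulation: three processes with hypothetical burst
 * times, a time slice of 2, all arriving at time 0. It prints the order in
 * which CPU slices are handed out. */
int main(void) {
    int burst[] = {5, 3, 8};        /* remaining CPU time of each process */
    int n = 3, slice = 2, time = 0, remaining = n;

    while (remaining > 0) {
        for (int i = 0; i < n; i++) {
            if (burst[i] <= 0) continue;          /* already finished */
            int run = burst[i] < slice ? burst[i] : slice;
            printf("t=%2d: run P%d for %d\n", time, i, run);
            time += run;
            burst[i] -= run;
            if (burst[i] == 0) {
                printf("t=%2d: P%d finished\n", time, i);
                remaining--;
            }
        }
    }
    return 0;
}
```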
Storage management
As a resource manager, the operating system is responsible for the following:
- Allocating and reclaiming memory space
- Providing techniques that logically expand memory space (virtual memory)
- Address translation: converting logical addresses to physical addresses; the programmer only deals with logical addresses
- Absolute loading: the compiler performs address translation (single-program batch systems, no operating system)
- Static relocatable loading: the loader performs address translation (early multiprogrammed batch systems)
- Dynamic run-time loading: address translation is done at run time (modern operating systems)
- Memory protection: the operating system must ensure that each process runs in its own memory space and does not interfere with others
- Setting upper-bound and lower-bound registers
- Using a relocation (base) register and a limit (boundary-address) register for checking (a small sketch of this check follows the list)
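The relocation-register/limit-register check can be sketched as follows; the register values and the fault handling are illustrative assumptions, not a real MMU implementation.

```c
#include <stdio.h>
#include <stdbool.h>

/* Sketch of the relocation-register / limit-register check: a logical
 * address is valid only if it is smaller than the limit (partition length);
 * the physical address is then relocation base + logical address.
 * Values are made up for illustration. */
typedef struct {
    unsigned relocation;   /* start of the process's partition in memory */
    unsigned limit;        /* size of the partition */
} mmu_regs_t;

bool translate(mmu_regs_t r, unsigned logical, unsigned *physical) {
    if (logical >= r.limit)         /* out of bounds: protection fault */
        return false;
    *physical = r.relocation + logical;
    return true;
}

int main(void) {
    mmu_regs_t regs = { .relocation = 0x8000, .limit = 0x1000 };
    unsigned phys;
    if (translate(regs, 0x0123, &phys))
        printf("logical 0x0123 -> physical 0x%X\n", phys);
    if (!translate(regs, 0x2000, &phys))
        printf("logical 0x2000 -> protection fault\n");
    return 0;
}
```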
1. Memory expansion technology
1.1 Overlay Technique
Core idea: the program is divided into multiple segments; frequently used segments stay resident in memory, while infrequently used segments are loaded only when needed.
Implementation: memory is divided into one fixed area and several overlay areas.
- Segments that must stay resident are placed in the fixed area and are not moved out until the program finishes
- Infrequently used segments go into an overlay area: they are loaded into memory when needed and moved out when not needed
Disadvantage: the overlay structure must be declared by the programmer (the operating system then performs the overlaying automatically), so it is not transparent to the user and increases the programming burden. The technique is now obsolete.
1.2 Swapping Technique
Design idea of swapping: when memory is tight, the system temporarily swaps some processes out of memory to external storage, and swaps in processes from external storage that are ready to run. (This is intermediate scheduling in processor scheduling: processes are dynamically moved between memory and disk, which relies on virtual storage technology.)
Process scheduling before swapping was introduced:
Process scheduling after swapping was introduced:
Questions:
- Where in external storage should a swapped-out process be kept?
- When should swapping happen?
- Which processes should be swapped out?
1. Where in external storage should a swapped-out process be kept?
In an operating system with swapping, the disk is usually divided into two parts: the file area and the swap area.
- The file area mainly stores files. Its main goal is storage space utilization, so its space is managed with discrete (non-contiguous) allocation.
- The swap area occupies only a small part of the disk and holds the data of swapped-out processes. Because swap-in and swap-out speed matters most, the swap area is managed with contiguous allocation.
- Since the file area is allocated discretely and the swap area contiguously, I/O operations on the swap area are faster.
2. When should swapping happen?
Swapping is usually done when many running processes are short of memory.
- If many processes are page-faulting frequently while running, memory is tight and some processes can be swapped out.
- If the page-fault rate drops significantly, swapping out can be paused.
3. Which processes should be swapped out?
Each of the following strategies has its pros and cons:
- Blocked processes can be swapped out first
- Low-priority processes can be swapped out
Note: the PCB stays resident in memory and is never swapped out to external storage.
2. Allocation and reclamation of memory space — contiguous allocation
There are two main ways to allocate memory space:
- Contiguous allocation: a process must be given one contiguous region of memory
- Single contiguous allocation
- Fixed partition allocation
- Dynamic partition allocation
- Non-contiguous allocation: a process may be given scattered regions of memory
- Basic paging storage management
- Basic segmented storage management
- Segment-page storage management
2.1 Single Contiguous Allocation
Memory is divided into a system area and a user area; only one user program runs at a time and occupies the entire user area.
2.2 Fixed partition allocation
The operating system maintains a data structure, the partition description table, to allocate and reclaim the partitions. Each entry records the size, start address, and status (allocated or not) of the corresponding partition.
Advantages: simple to implement; no external fragmentation.
Disadvantages: (a) when a user program is too large, no partition may be big enough, and overlay techniques have to be used, which hurts performance; (b) internal fragmentation is produced, so memory utilization is low.
2.3 Dynamic Partition Allocation
Dynamic partition allocation does not partition memory in advance. Instead, when a process is loaded into memory, a partition is created dynamically according to the process's size, so that the partition exactly fits the process's needs.
Questions to consider:
- What data structure is used to record memory usage?
- When a number of free partitions meet the requirements, which partition should be selected for allocation?
1. What data structure is used to record memory usage?
Typically a free partition table or a free partition chain.
2. When multiple free partitions meet the requirement, which one should be allocated?
Dynamic partition allocation algorithms:
When loading a new job into memory, a suitable partition must be selected from the free partition table (or free partition chain) and allocated to the job according to some dynamic partition allocation algorithm. The four classic algorithms are listed below (a toy first-fit sketch follows the list):
- First fit algorithm
- Best fit algorithm
- Worst fit algorithm
- Next fit algorithm
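To show what a dynamic partition allocation algorithm looks like in practice, here is a toy first-fit sketch over a free partition table; the partition sizes, the table layout, and the simplification of not merging or removing exhausted entries are assumptions for illustration.

```c
#include <stdio.h>

/* Toy first-fit dynamic partition allocation over a free-partition table.
 * The first free partition large enough is chosen and shrunk by the
 * requested amount. */
typedef struct {
    unsigned start;
    unsigned size;
} partition_t;

static partition_t free_table[] = {
    { 100, 20 }, { 200, 50 }, { 300, 120 }, { 500, 30 }
};
static const int NPART = sizeof(free_table) / sizeof(free_table[0]);

/* Returns the start address of the allocated block, or -1 on failure. */
int first_fit(unsigned request) {
    for (int i = 0; i < NPART; i++) {
        if (free_table[i].size >= request) {
            int addr = (int)free_table[i].start;
            free_table[i].start += request;   /* carve the block off the front */
            free_table[i].size  -= request;
            return addr;
        }
    }
    return -1;
}

int main(void) {
    printf("alloc 40 -> start %d\n", first_fit(40));   /* fits partition at 200 */
    printf("alloc 15 -> start %d\n", first_fit(15));   /* fits partition at 100 */
    return 0;
}
```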
3. Allocation and reclamation of memory space — non-contiguous allocation
There are two main ways to allocate memory space:
- Contiguous allocation: a process must be given one contiguous region of memory
- Single contiguous allocation
- Fixed partition allocation
- Dynamic partition allocation
- Non-contiguous allocation: a process may be given scattered regions of memory
- Basic paging storage management
- Basic segmented storage management
- Segment-page storage management
Above we studied contiguous allocation, which has some drawbacks:
- Fixed partition allocation: inflexible, produces internal fragmentation, low memory utilization
- Dynamic partition allocation: produces a lot of external fragmentation, which can be addressed with compaction, but compaction is time-consuming
3.1 Paging storage management
Core idea: memory is divided into fixed-size frames, and the process is split into pages of that same size. When the pages are small, internal fragmentation is small, which improves memory utilization.
Question to consider: How to convert a logical address to a physical address?
Recall that with contiguous allocation, a logical address is converted to a physical address by adding the module's start address in memory to the offset of the target cell relative to that start address.
Paging is similar, except that the process is divided into multiple pages that are stored non-contiguously in different parts of memory. The translation works as follows:
- Compute the page number of the logical address:
Page number = logical address / page length (integer division)
- Find the start address of that page in memory:
Look it up in the page table
- Compute the offset of the logical address within the page:
Offset = logical address % page length
- Physical address = start address of the page (frame) + offset within the page
Logical address structure: page number + in-page offset
Page table: the operating system creates a page table for each process, so that the start address in memory of a given page number can be looked up (a minimal sketch of this translation follows).
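Here is a minimal sketch of the translation steps above, assuming a made-up page size of 4096 bytes and a tiny hard-coded page table; a real system would also check that the page number is within range and that the page is present in memory.

```c
#include <stdio.h>

/* Sketch of the page-table lookup described above, with an assumed page
 * size of 4096 bytes and example frame start addresses. */
#define PAGE_SIZE 4096u

/* frame start address in memory for each page of the process (example data) */
static const unsigned page_table[] = { 0x8000, 0x3000, 0xC000 };

unsigned translate(unsigned logical) {
    unsigned page   = logical / PAGE_SIZE;   /* page number        */
    unsigned offset = logical % PAGE_SIZE;   /* offset within page */
    return page_table[page] + offset;        /* frame start + offset */
}

int main(void) {
    unsigned logical = 5000;                 /* page 1, offset 904 */
    printf("logical %u -> physical 0x%X\n", logical, translate(logical));
    return 0;
}
```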
3.2 Segmented Storage Management
Core idea: segments are similar to pages and the segment table is similar to the page table. The difference is that segments have different sizes, whereas pages all have the same size.
Segmentation: the program is divided into segments according to its own logical structure; each segment has a segment name and is addressed from 0.
Memory allocation rule: memory is allocated in units of segments. Each segment occupies a contiguous region of memory, but the segments themselves need not be contiguous with each other.
Logical address structure: segment number + intra-segment offset
Segment table: the operating system creates a segment table for each process, so that the start address in memory of a given segment number can be looked up (a sketch of this lookup, including the bounds check, follows).
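A comparable sketch for segment-table translation is shown below; unlike paging, the offset must be checked against the segment length. The base addresses and lengths are invented example data.

```c
#include <stdio.h>
#include <stdbool.h>

/* Sketch of segment-table translation: each entry stores the segment's
 * base address and length; the intra-segment offset must be smaller than
 * the length, otherwise an out-of-bounds fault is raised. */
typedef struct {
    unsigned base;     /* start address of the segment in memory */
    unsigned length;   /* segment length */
} seg_entry_t;

static const seg_entry_t seg_table[] = {
    { 0x4000, 0x0600 },    /* segment 0 */
    { 0xA000, 0x0200 },    /* segment 1 */
};

bool translate(unsigned segno, unsigned offset, unsigned *physical) {
    if (offset >= seg_table[segno].length)   /* bounds check */
        return false;
    *physical = seg_table[segno].base + offset;
    return true;
}

int main(void) {
    unsigned phys;
    if (translate(1, 0x0100, &phys))
        printf("segment 1, offset 0x100 -> physical 0x%X\n", phys);
    if (!translate(1, 0x0300, &phys))
        printf("segment 1, offset 0x300 -> out-of-bounds fault\n");
    return 0;
}
```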
3.3 Segment-page Storage Management
Let's compare the pros and cons of paging and segmentation:

| Method | Advantages | Disadvantages |
|---|---|---|
| Paging management | High memory utilization, no external fragmentation; with a small page size only a small amount of internal fragmentation is produced | Inconvenient for sharing and protecting information along logical boundaries |
| Segmentation management | Information can be shared and protected along logical boundaries | Segments may be very long, making it inconvenient to allocate a long contiguous space for them; external fragmentation is produced |

Segment-page management combines segmentation with paging: the process is first divided into segments according to its logical structure, and each segment is then divided into fixed-size pages.
4. Virtual memory
4.1 Basic concepts of virtual memory
Problems with traditional storage management:
Above we studied traditional storage management: contiguous and non-contiguous memory allocation. It has two problems:
- One-time loading: a job must be loaded into memory in its entirety before it can run.
  - When a job is very large, memory cannot hold it and the large job cannot run
  - When many jobs need to run, memory cannot hold them all and concurrency drops
- Residency: once a job is loaded into memory, it stays there until it finishes. In reality, only a small amount of its data needs to be accessed at any one time, so keeping everything resident wastes memory.
Locality principle:
- Temporal locality:
  - If an instruction in a program is executed, it is likely to be executed again soon
  - If a piece of data is accessed, it is likely to be accessed again soon after
- Spatial locality:
  - Once a program accesses a storage unit, nearby storage units are likely to be accessed soon after (because much data, such as arrays, is stored contiguously). A small demonstration follows.
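A simple way to observe spatial locality is to compare row-major and column-major traversal of a large 2-D array; the array size is arbitrary and the exact timings depend on the machine and compiler, so treat this only as a demonstration of the idea.

```c
#include <stdio.h>
#include <time.h>

/* Illustration of spatial locality: traversing a 2-D array row by row
 * touches consecutive memory cells and is typically faster than the
 * column-by-column traversal, which jumps across memory. */
#define N 2000
static int a[N][N];

int main(void) {
    clock_t t0 = clock();
    for (int i = 0; i < N; i++)          /* row-major: good locality */
        for (int j = 0; j < N; j++)
            a[i][j]++;
    clock_t t1 = clock();
    for (int j = 0; j < N; j++)          /* column-major: poor locality */
        for (int i = 0; i < N; i++)
            a[i][j]++;
    clock_t t2 = clock();

    printf("row-major:    %.3fs\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
    printf("column-major: %.3fs\n", (double)(t2 - t1) / CLOCKS_PER_SEC);
    return 0;
}
```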
Caching idea: keep recently and frequently accessed data in fast storage, and move data that is temporarily not needed to slower storage.
Definition of virtual memory: based on the locality principle and the caching idea, data that is not currently needed can be kept in external storage.
- When the information being accessed is not in memory, the operating system brings it in
- When memory runs short, the operating system moves temporarily unused information out to external storage
Under this management scheme, the user sees a memory that appears much larger than the physical memory; this is called virtual memory.
Virtual memory features:
- Multiplicity: solves the one-time-loading problem; a job no longer has to be loaded into memory all at once
- Swappability: a job need not stay resident in memory while it runs; data is placed according to how often it is used. The more frequently it is used, the closer it sits to the CPU (registers and cache), and vice versa.
- Virtuality: logically expands the capacity of memory.
File management
A file is a meaningful collection of information/data.
Question to consider:
- How is the data inside a file organized?
- How are files organized?
- From the bottom up, what features should the OS provide to make files easier to use?
- From the top down, how should file data be stored on external storage (disk)?
How is the data inside a file organized?
- Unstructured files: e.g. plain text files
- Structured files: e.g. database tables
How are files organized?
Similar to a tree structure, users can create layers of directories (folders) in which files can be stored.
From the bottom up, what features should the OS provide to make files easier to use? (A small sketch of these system calls follows the list.)
- Create a file (create system call)
- Read a file (read system call)
- Write a file (write system call)
- Delete a file (delete system call)
- Open a file (open system call): a file must be opened before it can be read or written
- Close a file (close system call): the file is closed after reading and writing are done
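Here is a small sketch of these operations using the Unix open/read/write/close system calls (unlink plays the role of the delete call); the file name demo.txt and its contents are made up for illustration.

```c
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

/* Sketch of create/open/write/read/close on a Unix-like system. */
int main(void) {
    char buf[32];

    /* create + open for writing (open system call with O_CREAT) */
    int fd = open("demo.txt", O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }
    write(fd, "hello file\n", 11);       /* write system call */
    close(fd);                           /* close system call */

    /* reopen for reading */
    fd = open("demo.txt", O_RDONLY);
    ssize_t n = read(fd, buf, sizeof(buf) - 1);   /* read system call */
    buf[n > 0 ? n : 0] = '\0';
    printf("read back: %s", buf);
    close(fd);

    unlink("demo.txt");                  /* delete the file */
    return 0;
}
```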
From the top down, how should file data be stored on external storage (disk)?
- Similar to memory, external storage is composed of storage units, and each storage unit corresponds to a physical address.
- External memory is also divided into blocks, each of which is the same size
1. Logical structure of the file
- Unstructured files
- Structured files
- Sequential file
- Index file
- Indexed sequential file
1.1 Unstructured Files
1.2 Structured Files
2. The disk
2.1 Disk Structure
Platter, track, sector:
Disk surface and cylinder:
2.2 Disk Scheduling Algorithms
The common algorithms are listed below (a toy SSTF simulation follows the list):
- First come, first served (FCFS)
- Shortest seek time first (SSTF)
- Scan algorithm (SCAN)
- Circular scan algorithm (C-SCAN)
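To illustrate one of these algorithms, here is a toy shortest-seek-time-first (SSTF) simulation; the request queue and the starting head position at track 53 are invented example values.

```c
#include <stdio.h>
#include <stdlib.h>

/* Toy SSTF simulation: from the current head position, always service the
 * pending request on the closest track, and sum the total head movement. */
int main(void) {
    int requests[] = { 98, 183, 37, 122, 14, 124, 65, 67 };
    int n = sizeof(requests) / sizeof(requests[0]);
    int served[8] = { 0 };
    int head = 53, total = 0;

    for (int k = 0; k < n; k++) {
        int best = -1, best_dist = 0;
        for (int i = 0; i < n; i++) {            /* find closest unserved track */
            if (served[i]) continue;
            int dist = abs(requests[i] - head);
            if (best < 0 || dist < best_dist) { best = i; best_dist = dist; }
        }
        served[best] = 1;
        total += best_dist;
        head = requests[best];
        printf("seek to %3d (moved %3d)\n", head, best_dist);
    }
    printf("total head movement: %d tracks\n", total);
    return 0;
}
```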