The operating system

A modern computer consists of one or more processors, main memory, disks, printers, a keyboard, a mouse, a display, network interfaces, and various other input/output devices. All in all, a computer is a complex system.

However, programmers do not deal with this hardware directly, and no programmer can master every detail of it. For this reason, computers are equipped with a layer of software on top of the hardware that responds to instructions from the user and controls the hardware on the user's behalf. This software is called the operating system, and its job is to provide user programs with a better, simpler, cleaner model of the computer.

The most common operating systems are Windows, Linux, FreeBSD, and OS X. The graphical user interface through which most users interact with them is called the GUI; the text-based command-line interface is usually called a shell. Here are the components of the operating system we will explore.

This is a simplified view of the operating system. At the bottom is the hardware: the chips, circuit boards, disks, keyboard, monitor, and other devices mentioned above. On top of the hardware is the software. Most computers have two operating modes: kernel mode and user mode. The most fundamental piece of software is the operating system, which runs in kernel mode (also called supervisor mode or privileged mode). In kernel mode, the operating system has full access to the hardware and can execute any instruction the machine is capable of running. The rest of the software runs in user mode.

User interface programs (shells or GUIs) run in user mode, at the lowest level of user-mode software, and allow the user to start other programs such as Web browsers, e-mail readers, and music players. Applications closer to the user are also easier to replace: if you don't like a particular e-mail reader you can rewrite it or swap it for another, but you cannot write your own interrupt handler or replace the operating system itself, because those parts are protected by hardware against modification.

Introduction to Computer Hardware

An operating system is intimately tied to the hardware it runs on. It extends the computer's instruction set and manages the computer's resources. To do that, the operating system must know a great deal about how the hardware operates, so here is a brief overview of the hardware found in modern computers.

Conceptually, a simple personal computer can be abstracted into a model like the one above, in which the CPU, memory, and I/O devices are all connected to a bus and communicate with one another through it. Modern computers have a much more complex structure involving many buses, as we will see later, but for now this model is sufficient for our discussion.

CPU

The CPU is the brain of the computer. It interacts mainly with memory, fetching instructions from memory and executing them. A CPU's basic cycle is to fetch the first instruction from memory, decode it to determine its type and operands, execute it, and then fetch, decode, and execute the next instruction. The cycle repeats until the program finishes.
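The fetch-decode-execute cycle can be sketched in a few lines of code. This toy simulator uses an invented accumulator-style instruction set, not any real ISA, purely to illustrate the loop described above:

```python
# A toy illustration of the fetch-decode-execute cycle (not a real ISA).
# Each "instruction" is a tuple: (opcode, operand).

def run(program, memory):
    pc = 0          # program counter: address of the next instruction
    acc = 0         # a single accumulator register
    while pc < len(program):
        opcode, operand = program[pc]   # fetch and decode
        pc += 1                         # update the PC to the next instruction
        if opcode == "LOAD":            # execute
            acc = memory[operand]
        elif opcode == "ADD":
            acc += memory[operand]
        elif opcode == "STORE":
            memory[operand] = acc
        elif opcode == "HALT":
            break
    return memory

mem = {0: 7, 1: 5, 2: 0}
prog = [("LOAD", 0), ("ADD", 1), ("STORE", 2), ("HALT", None)]
run(prog, mem)
print(mem[2])  # 12
```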

Each CPU has a specific instruction set that it can execute, so an x86 CPU cannot execute ARM programs and an ARM CPU cannot execute x86 programs. Because accessing memory to fetch an instruction or data word takes much longer than executing an instruction, all CPUs contain internal registers to hold key variables and temporary results. The instruction set therefore usually includes instructions for loading a word from memory into a register and storing a word from a register into memory. Other instructions combine operands from registers and memory; for example, an add instruction adds two operands and saves the result in a register or in memory.

In addition to the general-purpose registers used to hold variables and temporary results, most computers have several special registers that are visible to the programmer. One of these is the program counter, which holds the memory address of the next instruction to be fetched. After an instruction is fetched, the program counter is updated to point to the instruction after it.

Another register is the stack pointer, which points to the top of the current stack in memory. The stack contains one frame for each procedure that has been entered but not yet exited; a frame holds the procedure's input parameters, local variables, and any temporary variables that are not kept in registers.

Yet another register is the PSW (Program Status Word), which tracks the current state of the CPU. It contains the condition code bits set by comparison instructions, the CPU priority, the mode bit (user or kernel), and various other control bits. User programs can normally read the entire PSW but can write only some of its fields. The PSW plays an important role in system calls and I/O.

The operating system must be aware of all the registers. When time-multiplexing the CPU, the operating system often stops one running program in order to start or restart another. Each time it stops a program, the operating system saves the values of all the registers so that the program can be resumed later.

To improve performance, CPU designers long ago abandoned fetching, decoding, and executing one instruction at a time. Many modern CPUs can process several instructions at once. For example, a CPU might have separate fetch, decode, and execute units, so that while it is executing instruction n it can be decoding instruction n + 1 and fetching instruction n + 2. An organization like this is called a pipeline.

A design even more advanced than the pipeline is the superscalar CPU, shown below.

In this design there are multiple execution units, for example, one for integer arithmetic, one for floating-point arithmetic, and one for Boolean operations. Two or more instructions are fetched and decoded at once and dropped into a holding buffer until they can be executed. Whenever an execution unit becomes free, it checks the buffer for an instruction it can execute; if there is one, it removes the instruction from the buffer and executes it. An implication of this design is that instructions are often executed out of order. For the most part, the hardware is responsible for making sure the result is the same as a strictly sequential execution would have produced.

With the exception of very simple CPUs used in embedded systems, most CPUs have the two modes mentioned earlier: kernel mode and user mode. Typically a bit in the PSW controls which mode the CPU is currently in. When running in kernel mode, the CPU can execute every instruction in the instruction set and use every feature of the hardware. On desktop and server machines the operating system normally runs in kernel mode, giving it access to the complete hardware. In most embedded systems, a small piece of the system runs in kernel mode and the rest runs in user mode.

User applications run in user mode, in which the CPU can execute only a subset of the instruction set and access only a subset of the hardware's features. Generally, all instructions involving I/O and memory protection are disallowed in user mode, and setting the PSW mode bit to kernel mode is of course also forbidden.

To obtain services from the operating system, a user program must make a system call, which traps into the kernel and invokes the operating system. The TRAP instruction switches the CPU from user mode to kernel mode and starts the operating system. When the work is done, control returns to the user program at the instruction following the system call. We will explore the details of system calls later.
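The trap mechanism is easy to observe from a high-level language. In Python, for instance, the functions in the os module are thin wrappers around system calls; each call below crosses from user mode into kernel mode and back:

```python
import os

# os.getpid() wraps the getpid() system call: the library issues a trap,
# the CPU switches to kernel mode, the kernel looks up the process ID,
# and control returns to user mode at the next instruction.
pid = os.getpid()
print(pid)

# os.open/os.write/os.close wrap the corresponding system calls the same way.
fd = os.open(os.devnull, os.O_WRONLY)
n = os.write(fd, b"hello")   # traps into the kernel to perform the write
os.close(fd)
print(n)  # 5: the number of bytes the kernel reports as written
```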

It is worth noting that computers have traps other than the instruction for executing a system call. Most other traps are caused by the hardware to warn of an exceptional situation, such as an attempt to divide by zero or a floating-point underflow. In all cases the operating system takes control and decides how to handle the exception; sometimes the program must be terminated because of the error.

Multithreaded and multi-core chips

The Intel Pentium 4 introduced a feature called multithreading or hyperthreading, which is now found in many x86 processors and other CPU chips, including the SPARC, the Power5, the Intel Xeon, and the Intel Core family. Roughly speaking, multithreading allows the CPU to hold the state of two different threads and switch between them on a nanosecond time scale. (A thread is a kind of lightweight process, which we will discuss later.) For example, if one thread needs to read a word from memory (which takes many clock cycles), a multithreaded CPU can switch to another thread. Multithreading does not offer true parallelism: only one thread runs at any instant.

Multithreading matters to the operating system because each thread appears to it as a separate CPU. For example, a chip with two CPUs, each running two threads, looks to the operating system like four CPUs.

Beyond multithreading, many CPU chips now carry four, eight, or more complete processors, or cores, on them. A four-core chip effectively packs four small chips onto one die, each with its own independent CPU.

When it comes to sheer core count, nothing beats a modern graphics processing unit (GPU), a processor made up of thousands of tiny cores. GPUs are very good at large numbers of simple computations performed in parallel.

Memory

The second major component of any computer is memory. Ideally, memory should be extremely fast (faster than the CPU can execute an instruction, so the CPU is never held up by memory), abundantly large, and dirt cheap. No current technology satisfies all three goals, so memory is instead built as a hierarchy of layers.

The top layers have the highest speed but the smallest capacity and the highest cost per bit. The lower the layer, the slower the access, the larger the capacity, and the lower the cost.

Registers

At the top of the hierarchy are the registers inside the CPU. They are made of the same material as the CPU itself, so they are just as fast as the CPU. Programs must manage the registers themselves, in software (that is, decide what to keep in them).

The cache

Below the registers comes the cache, which is mostly controlled by the hardware. Main memory is divided into cache lines, typically 64 bytes each, with addresses 0 to 63 in cache line 0, addresses 64 to 127 in cache line 1, and so on. The most heavily used cache lines are kept in a high-speed cache located inside or very close to the CPU. When a program needs to read a memory word, the cache hardware checks whether the line needed is in the cache. If it is, we have a cache hit: the request is satisfied from the cache and no memory request is sent over the bus to main memory. A cache hit normally takes about two clock cycles; a cache miss requires going to memory, at a substantial penalty in time. Cache memory is limited in size because it is expensive. Some machines have two or even three levels of cache, each one slower and larger than the one before it.
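The address arithmetic behind this scheme is simple enough to sketch. Here is how a tiny direct-mapped cache (the sizes are hypothetical) would split a memory address into a tag, a line index, and an offset:

```python
LINE_SIZE = 64          # bytes per cache line, as described above
NUM_LINES = 8           # a tiny direct-mapped cache, for illustration only

def split_address(addr):
    """Map a byte address to (tag, line index, offset within the line)."""
    offset = addr % LINE_SIZE
    line_number = addr // LINE_SIZE        # which 64-byte block of memory
    index = line_number % NUM_LINES        # which cache slot it maps to
    tag = line_number // NUM_LINES         # identifies the block in that slot
    return tag, index, offset

# Addresses 0-63 belong to cache line 0, 64-127 to line 1, and so on.
print(split_address(0))     # (0, 0, 0)
print(split_address(100))   # line 1, offset 36: (0, 1, 36)
print(split_address(512))   # line 8 wraps around to slot 0 with tag 1: (1, 0, 0)
```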

Caching plays an important role in many areas of the computer, not just the RAM cache rows.

Random Access Memory (RAM): the most important type of memory, from which data can be both read and written. When the machine is turned off, the contents of RAM are lost.

Whenever a large resource can be divided into pieces, some of which are used much more heavily than others, caching is often invoked to improve performance, and operating systems use it constantly. For example, most operating systems keep (pieces of) heavily used files in main memory to avoid fetching them from disk repeatedly. Similarly, the result of translating a long path name such as /home/ast/projects/minix3/src/kernel/clock.c into the disk address where the file is located can be cached to avoid repeated lookups. Likewise, when the address of a Web page (URL) is translated into a network address (IP address), the result can be cached for future use.

In any caching system, there are several issues that need to be addressed

  • When to put new content into the cache
  • Which line in the cache should the new content be placed in
  • Which piece of content should be removed from the cache when free space is needed
  • Where the removed content should be placed in some larger store

Not every question is relevant to every caching situation. For lines of main memory in the CPU cache, a new line is brought in whenever there is a cache miss, and the cache line to use is usually computed from some of the high-order bits of the memory address referenced.
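To make the four questions concrete, here is a minimal sketch of a fully associative cache with least-recently-used (LRU) replacement. The policy choices here (fill on miss, place anywhere, evict the LRU entry, write the victim back to the larger store) are just one possible set of answers:

```python
from collections import OrderedDict

class LRUCache:
    """A tiny cache illustrating the four questions above: fill on a miss,
    place the entry anywhere (fully associative), evict the least recently
    used entry, and write the victim back to the backing store."""

    def __init__(self, capacity, backing_store):
        self.capacity = capacity
        self.store = backing_store          # the larger, slower memory
        self.lines = OrderedDict()          # key -> value, oldest first

    def read(self, key):
        if key in self.lines:               # cache hit
            self.lines.move_to_end(key)     # mark as most recently used
            return self.lines[key]
        value = self.store[key]             # cache miss: fetch from the store
        if len(self.lines) >= self.capacity:
            victim, v = self.lines.popitem(last=False)  # evict the LRU entry
            self.store[victim] = v          # write it back to the store
        self.lines[key] = value
        return value

store = {i: i * 10 for i in range(6)}
cache = LRUCache(2, store)
cache.read(0); cache.read(1); cache.read(2)   # capacity 2, so 0 is evicted
print(list(cache.lines))   # [1, 2]
```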

Caches are such a good idea that modern CPUs have two of them. The first-level, or L1, cache is always inside the CPU and usually feeds decoded instructions into the CPU's execution engine. Most chips have a second L1 cache for heavily used data words; a typical L1 cache is 16 KB. In addition there is usually a second-level, or L2, cache, which holds several megabytes of recently used memory words. The biggest difference between the L1 and L2 caches is the timing: access to the L1 cache is done without any delay, whereas access to the L2 cache involves a delay of one or two clock cycles.

What is a clock cycle? The speed of a processor is governed by its clock, and a clock cycle is the amount of time between two pulses of the oscillator. Generally, the more pulses per second, the faster the processor can process information. Clock speed is measured in hertz, usually megahertz (MHz) or gigahertz (GHz). For example, a 4 GHz processor performs 4,000,000,000 clock cycles per second.

Depending on the type of processor, a computer can execute one or more instructions per clock cycle. Early processors and slower CPUs could execute only one instruction per clock cycle, while modern processors can execute several.
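The arithmetic is simple enough to sketch. The instructions-per-cycle figure below is purely illustrative:

```python
# The clock period is the reciprocal of the clock frequency.
def cycle_time_ns(freq_hz):
    return 1e9 / freq_hz

print(cycle_time_ns(4e9))    # 0.25 ns per cycle at 4 GHz

# A rough peak-throughput estimate with made-up numbers: a 4 GHz CPU
# retiring 2 instructions per cycle peaks at 8 billion instructions/second.
freq_hz = 4e9
instructions_per_cycle = 2
print(freq_hz * instructions_per_cycle)
```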

Main memory

Next in the hierarchy is main memory, the workhorse of the memory system. Main memory is usually called RAM (Random Access Memory); old-timers sometimes call it core memory, because computers of the 1950s and 1960s used tiny magnetizable ferrite cores for their main memory. All memory requests that cannot be satisfied from the cache go to main memory.

In addition to main memory, many computers contain a small amount of non-volatile random-access memory, which, unlike RAM, does not lose its contents when the power is switched off. ROM (Read Only Memory) cannot be modified once written. It is fast and cheap. (If anyone ever asks you for fast, cheap, non-volatile memory, ROM is it.) In a computer, the bootstrap loader used to start the machine is stored in ROM, and some I/O cards also use ROM for low-level device control.

EEPROM (Electrically Erasable PROM) and flash memory are also non-volatile, but unlike ROM they can be erased and rewritten. However, writing them takes far more time than writing RAM, so they are used in the same way as ROM, with the added advantage that errors in the programs they hold can be corrected by rewriting them in the field.

Flash memory is also commonly used as the storage medium in portable devices: it serves as the film in digital cameras and the disk in portable music players. In speed, flash memory falls somewhere between RAM and disk. Also, unlike disk, flash memory wears out if it is erased too many times.

Yet another kind of memory is CMOS, which is volatile. Many computers use CMOS memory, kept alive by a small battery, to hold the current time and date.

Disk

The next layer down is the disk (hard drive). Disk storage is two orders of magnitude cheaper per bit than RAM and often two orders of magnitude larger as well. The only problem is that the time to randomly access data on it is close to three orders of magnitude slower. The reason is that a disk is built very differently: it is a mechanical device.

A disk consists of one or more metal platters that rotate at 5,400, 7,200, 10,800 or more revolutions per minute. A mechanical arm pivots over the platters from a corner, much like the pickup arm on an old 33-rpm record player. Information is written onto the disk in a series of concentric circles. At any given arm position, each head can read an annular region called a track. Together, all the tracks at a given arm position form a cylinder.

Each track is divided into sectors, typically of 512 bytes. On modern disks, the outer cylinders contain more sectors than the inner ones. Moving the arm from one cylinder to the next takes about 1 ms; moving it to a random cylinder typically takes 5 to 10 ms, depending on the drive. Once the arm is on the correct track, the drive must wait for the needed sector to rotate under the head before reading or writing can begin, at rates of about 50 MB/s on low-end disks and up to 160 MB/s on fast ones.
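These figures allow a rough back-of-the-envelope estimate of a random disk access. All the numbers below are illustrative, taken from the ranges above rather than measured on any particular drive:

```python
# A back-of-the-envelope estimate of one random disk access.
# Access time = seek + rotational latency + transfer time.

def access_time_ms(seek_ms, rpm, transfer_mb_s, request_kb):
    rotational_latency = 0.5 * (60_000 / rpm)            # half a revolution, in ms
    transfer = request_kb / 1024 / transfer_mb_s * 1000  # transfer time, in ms
    return seek_ms + rotational_latency + transfer

# A hypothetical 7,200-rpm drive: 8 ms average seek, 160 MB/s transfer,
# reading a single 4 KB request:
t = access_time_ms(8, 7200, 160, 4)
print(round(t, 2))   # about 12.19 ms, dominated by seek and rotation, not transfer
```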

Solid-state drives (SSDs) are not really disks at all: they have no moving parts and do not resemble spinning platters. They store data in flash memory and resemble disks only in that they hold a large amount of data that is not lost when the power is off.

Many computers support a scheme known as virtual memory, which makes it possible to run programs larger than physical memory by placing them on the disk and using main memory as a kind of cache for the most heavily used parts. This scheme requires remapping memory addresses on the fly, converting the address the program generates into the physical address in RAM where the word is located. This mapping is done by a part of the CPU called the MMU (Memory Management Unit).
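The MMU's job can be sketched with a toy single-level page table; the page size and mappings below are hypothetical:

```python
PAGE_SIZE = 4096   # a common page size; real MMUs support several sizes

# A hypothetical page table: virtual page number -> physical frame number.
# Page 2 is deliberately absent, standing in for a page that is on disk.
page_table = {0: 2, 1: 5, 3: 1}

def translate(virtual_addr):
    """Mimic what an MMU does: split the address, look up the frame."""
    page = virtual_addr // PAGE_SIZE
    offset = virtual_addr % PAGE_SIZE
    if page not in page_table:
        raise LookupError("page fault: page %d is not in RAM" % page)
    return page_table[page] * PAGE_SIZE + offset

print(translate(4100))    # page 1, offset 4 -> frame 5 -> 20484
```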

The presence of caching and the MMU has a major impact on performance. In a multiprogramming system, switching from one program to another is called a context switch. A context switch may make it necessary to flush the cache and reload the MMU's mapping registers, both of which are expensive operations.

I/O devices

The CPU and memory are not the only things the operating system must manage; I/O devices interact heavily with it as well. As the figure above shows, an I/O device generally consists of two parts: a device controller and the device itself. The controller is a chip or a set of chips that physically controls the device. It accepts commands from the operating system, for example, to read data from the device, and carries them out.

In many cases, actually controlling the device is complicated and full of detail, so it is the controller's job to present a simpler (though still far from simple) interface to the operating system, shielding it from the physical details. Adding a layer of indirection to hide complexity is a universal trick, in computing and in human society alike.

The other part of the I/O device is the device itself. Devices have fairly simple interfaces, both because they cannot do very much and because they have been standardized. Standardization matters: thanks to it, any SATA disk controller can drive any SATA disk. ATA stands for AT Attachment, and SATA stands for Serial ATA.

AT what? The AT was IBM's second-generation personal computer, an "advanced technology" product built around the 6 MHz 80286 processor introduced in 1984, the most powerful processor of its day.

Words like "advanced" should be used sparingly, or twenty years on they may come back to embarrass you.

SATA is now the standard disk interface on many computers. Since the actual device interface is hidden behind the controller, the operating system sees only the interface to the controller, which may be quite different from the interface to the device itself.

Because each type of controller is different, different software is needed to control each one. The software that talks to a controller, issuing commands to it and accepting responses, is called a device driver. Each controller manufacturer must supply a driver for every operating system it supports.

To be used, a driver has to be installed in the operating system so that it can run in kernel mode. There are three general ways a driver can be loaded into the operating system:

  • The first way is to relink the kernel with the new driver and then reboot the system. Many older UNIX systems work this way.
  • The second way is to make an entry in an operating system file saying that a driver is needed and then reboot. At boot time, the operating system finds the drivers it needs and loads them. This is the way Windows works.
  • The third way is for the operating system to accept new drivers while running and install them on the fly, without rebooting. This used to be rare but is becoming much more common. Hot-pluggable devices such as USB and IEEE 1394 devices always need dynamically loaded drivers.

Every device controller has a small number of registers used to communicate with it. For example, a minimal disk controller might have registers for specifying the disk address, the memory address, and the sector count. To activate the controller, the device driver gets a command from the operating system, translates it into the appropriate values, and writes those values into the device registers. The collection of all the device registers forms the I/O port space.

On some computers, the device registers are mapped into the operating system's address space, so they can be read and written like ordinary memory words. On such computers no special I/O instructions are needed, and user programs can be kept away from the hardware by simply keeping those memory addresses out of their reach (for example, by using base and limit registers). On other computers, the device registers sit in a special I/O port space, with each register having a port address; on these machines, special IN and OUT instructions, available only in kernel mode, let drivers read and write the registers. The first scheme eliminates the need for special I/O instructions but uses up some address space; the second uses no address space but requires special instructions. Both schemes are widely used.

There are three ways to implement inputs and outputs.

  • In the simplest method, a user program issues a system call, which the kernel translates into a procedure call to the appropriate driver. The driver starts the I/O and then sits in a tight loop, continuously polling the device to see if it is done (there is usually a bit that indicates the device is still busy). When the I/O has completed, the driver puts the data where it is needed and returns, and the operating system then returns control to the caller. This method is called busy waiting, and its disadvantage is that it ties up the CPU polling the device until the I/O is finished.
  • In the second method, the driver starts the device and asks it to give an interrupt when it is finished. At that point the driver returns; the operating system blocks the caller if need be and goes off to do other work. When the controller detects the end of the transfer, it generates an interrupt to signal completion.
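The cost of the first method is easy to see in a sketch. Here a fake device object stands in for the hardware's busy bit; every pass through the loop is CPU time spent on polling rather than useful work:

```python
# A sketch of busy waiting: the driver spins on the device's "busy" bit.
class FakeDevice:
    """Stands in for real hardware: reports busy for a few polls, then
    delivers its data, as a real controller would after the I/O finishes."""

    def __init__(self, cycles_until_done):
        self._countdown = cycles_until_done
        self.data = None

    def busy(self):
        self._countdown -= 1
        if self._countdown <= 0:
            self.data = b"sector-0"   # the transfer has completed
            return False
        return True

def read_with_busy_wait(device):
    polls = 0
    while device.busy():              # the CPU does nothing useful here
        polls += 1
    return device.data, polls

data, polls = read_with_busy_wait(FakeDevice(5))
print(data, polls)   # b'sector-0' 4
```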

Interrupts are very important in operating systems, so this needs to be discussed in more detail.

As the figure above shows, interrupt-driven I/O is a multi-step process. In step 1, the driver tells the controller what to do by writing into its device registers, and the controller then starts the device. When the controller has finished reading or writing the number of bytes it was told to transfer, it signals the interrupt controller chip in step 2, using certain bus lines. If the interrupt controller is ready to accept the interrupt (which it may not be if it is busy with a higher-priority one), it asserts a pin on the CPU chip. That is step 3.

In step 4, the interrupt controller puts the device's number on the bus, so the CPU can read it and know which device has just finished (many devices may be running at the same time).

Once the CPU decides to take the interrupt, the program counter and PSW are pushed onto the current stack and the CPU is switched into kernel mode. The device number may be used as an index into a part of memory to find the address of that device's interrupt handler; this part of memory is called the interrupt vector. Once the interrupt handler (part of the driver for the interrupting device) has started, it saves the pushed program counter and PSW and queries the device to learn its status. When the handler is finished, it returns to the previously running user program, at the first instruction that had not yet been executed.
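The dispatch through the interrupt vector can be sketched as a table lookup. Everything here, the device numbers, the handlers, and the saved state, is a simplified stand-in for what the hardware and driver actually do:

```python
# A sketch of interrupt dispatch: the device number indexes a table of
# handler addresses (here, plain Python functions).
log = []

def disk_handler():
    log.append("disk interrupt handled")

def keyboard_handler():
    log.append("keyboard interrupt handled")

interrupt_vector = {3: keyboard_handler, 14: disk_handler}  # made-up numbers

def cpu_take_interrupt(device_number, pc, psw):
    stack = [pc, psw]                       # push the PC and PSW, as in the text
    interrupt_vector[device_number]()       # jump to the device's handler
    psw, pc = stack.pop(), stack.pop()      # restore the saved state and resume
    return pc, psw

pc, psw = cpu_take_interrupt(14, pc=0x1000, psw=0b1)
print(log, hex(pc))   # ['disk interrupt handled'] 0x1000
```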

  • The third way to do I/O uses special hardware: a DMA (Direct Memory Access) chip that can control the flow of bits between memory and a controller without constant CPU intervention. The CPU sets up the DMA chip, telling it how many bytes to transfer, the device and memory addresses involved, and the direction of transfer, and then lets it run. When the DMA chip is done, it causes an interrupt, which is handled as just described. We will discuss interrupts and DMA in more detail later.

Interrupts can (and often do) arrive at highly inconvenient moments, for example, while another interrupt handler is running. For this reason, the CPU has a way to disable interrupts and re-enable them later. While interrupts are disabled, any device that has asserted an interrupt keeps its signal pending, but the CPU is not interrupted until interrupts are enabled again. If several devices have raised interrupts in the meantime, the interrupt controller decides which one to handle first, usually based on static priorities assigned to each device in advance. The highest-priority device wins, and the others must wait.

The bus

The structure above (the diagram of a simple personal computer's components) was used in minicomputers for years and also in the original IBM PC. However, as processors and memories got faster, the ability of a single bus (and certainly the IBM PC bus) to handle all the traffic reached its limit, and the model had to be abandoned. The result was the appearance of additional buses, both for faster I/O devices and for CPU-to-memory traffic. This evolution produced the structure shown below.

The x86 system in the figure contains many buses: cache, memory, PCIe, PCI, USB, SATA, and DMI, each with a different transfer rate and function. The operating system must understand all of them in order to configure and manage the system. The main bus is the PCIe (Peripheral Component Interconnect Express) bus.

The PCIe bus was invented by Intel as a successor to the older PCI bus, which in turn had replaced the antique ISA (Industry Standard Architecture) bus. With a capacity of tens of gigabytes per second, PCIe is much faster than its predecessors, and it is also fundamentally different. Up until its invention in 2004, most buses were parallel and shared. A shared bus architecture means that multiple devices use the same set of wires to transfer data, so when several devices want to send at the same time, an arbiter is needed to decide who may use the bus. PCIe, by contrast, uses dedicated point-to-point connections. A parallel bus architecture, as used in traditional PCI, sends each word of data over multiple wires; for example, on the traditional PCI bus a single 32-bit number is sent over 32 parallel wires. PCIe instead uses a serial bus architecture and sends all the bits of a message through a single connection, known as a lane, much like a network packet. This is much simpler, because there is no longer any need to guarantee that all 32 bits arrive at the destination at exactly the same time. Parallelism is still exploited, by running multiple lanes side by side: for example, 32 messages can be carried in parallel over 32 lanes.

In the figure above, the CPU talks to memory (the DIMMs) over a DDR3 bus, to the graphics device over PCIe, and to all other devices through a hub over the DMI (Direct Media Interface) bus. The hub in turn talks to USB devices over the Universal Serial Bus, to hard disks and DVD drives over SATA, and transmits Ethernet frames over PCIe.

Not only that: each core has its own dedicated cache, in addition to a larger cache that the cores share.

The Universal Serial Bus (USB) is used to connect all the slow I/O devices, such as keyboards and mice, to the computer. USB 1.0 could handle an aggregate load of 12 Mb/s, USB 2.0 raised that to 480 Mb/s, and USB 3.0 reaches no less than 5 Gb/s. All USB devices can be attached to a running computer and begin working immediately, without requiring a reboot.

The SCSI (Small Computer System Interface) bus is a high-performance bus intended for fast disks, scanners, and other devices needing considerable bandwidth. Nowadays it is found mostly in servers and workstations, where it can run at up to 640 MB/s.

Computer startup process

With the hardware above and the support of the operating system, the computer can get to work. But what actually happens when a computer boots? Here is a brief version of the startup process.

Every computer contains a motherboard (also called the mainboard), one of the most fundamental and important parts of the machine. The motherboard is usually a rectangular circuit board carrying the computer's main circuitry: the BIOS chip, I/O controller chips, connectors for the keyboard, the front-panel switches and indicator lights, the expansion slots, and the DC power connectors for the board and its plug-in cards.

On the motherboard is a program called the BIOS (Basic Input Output System). The BIOS contains low-level I/O software, including procedures to read the keyboard, write to the screen, and do disk I/O. Nowadays it is held in flash memory, which is non-volatile but can be updated by the operating system when bugs are found in the BIOS.

When the computer is booted, the BIOS starts up. It first checks how much RAM is installed and whether the keyboard and other basic devices are installed and responding normally. Next, it scans the PCIe and PCI buses to detect all the devices attached to them, recording plug-and-play devices as it goes. If the set of devices differs from the one present at the previous boot, the new devices are configured.

After that, the BIOS determines the boot device by trying a list of devices stored in the CMOS memory.

CMOS stands for Complementary Metal Oxide Semiconductor. The name refers to a technique for making large-scale integrated chips, and by extension to chips made with that technique. On the motherboard it takes the form of a small readable and writable RAM chip, used to hold the hardware parameters configured in the BIOS setup. This chip does nothing but store data.

The BIOS parameters themselves are set through a special program. The BIOS setup program is integrated into the chip by the manufacturer, and pressing a specific key during boot enters it, making it easy to configure the system. This is why BIOS settings are sometimes called CMOS settings.

The user can enter the BIOS configuration program just after booting to modify the device list. The BIOS then tries to boot, typically first from a CD-ROM or USB drive if one is present; if that fails (that is, there is none), the system boots from the hard disk. The first sector of the boot device is read into memory and executed. This sector contains a program that normally examines the partition table at the end of the boot sector to determine which partition is active. Then a secondary boot loader is read in from that partition; this loader reads in the operating system from the active partition and starts it.
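The first-stage boot program's check of the boot sector can be sketched as follows. The offsets used (the partition table at byte 446, the boot flag 0x80, the signature 0x55AA at byte 510) are those of the classic PC MBR layout:

```python
# A sketch of what first-stage boot code does: read the 512-byte boot
# sector (MBR), check its signature, and find the active partition.

def find_active_partition(mbr):
    assert len(mbr) == 512
    if mbr[510:512] != b"\x55\xaa":          # the mandatory boot signature
        raise ValueError("not a valid boot sector")
    for i in range(4):                       # four 16-byte partition entries
        entry = mbr[446 + 16 * i : 446 + 16 * (i + 1)]
        if entry[0] == 0x80:                 # boot-indicator flag: active
            return i
    raise ValueError("no active partition")

# Build a fake boot sector with partition 1 marked active:
mbr = bytearray(512)
mbr[510:512] = b"\x55\xaa"
mbr[446 + 16] = 0x80        # set entry 1's boot flag
print(find_active_partition(bytes(mbr)))   # 1
```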

The operating system then queries the BIOS for the configuration information. For each device, it checks whether a device driver is present. If not, it asks the user to insert a CD-ROM (supplied by the device manufacturer) or to download the driver from the Internet. Once it has all the device drivers, the operating system loads them into the kernel, initializes its tables, creates whatever background processes (daemons) are needed, and starts a login program or GUI.

Operating System Museum

Operating systems have been around for the better part of a century. During that time a wide variety of them have appeared, though not all are well known. Here are some of the more notable categories.

Mainframe operating systems

At the high end are the mainframe operating systems, the large systems found in the data centers of major corporations. These computers differ from PCs mainly in their I/O capacity: a mainframe with 1,000 disks and millions of gigabytes of storage is not unusual, and a PC with those specs would be the envy of its owner's friends. Mainframes are also reappearing as high-end Web servers and servers for large e-commerce sites.

Server operating system

One level down are the server operating systems. They run on servers, which may be large personal computers, workstations, or even mainframes. They serve multiple users at once over a network and allow users to share hardware and software resources. Servers can provide print service, file service, or Web service. Internet providers run many server machines to support their customers, and Web sites use servers to store Web pages and handle incoming requests. Typical server operating systems are Solaris, FreeBSD, Linux, and Windows Server 201x.

Multiprocessor operating system

An increasingly common way to gain large computing power is to connect multiple CPUs into a single system. Depending on precisely how they are connected and what is shared, these systems are called parallel computers, multicomputers, or multiprocessors. They need special operating systems, though these are often variations of server operating systems with special features for communication, connectivity, and consistency.

With the recent advent of multicore chips for personal computers, conventional desktop and laptop operating systems are also starting to deal with small-scale multiprocessors, and the number of cores keeps growing over time. Most major operating systems, including Windows and Linux, run on multicore processors.

Personal computer system

The next category is the personal computer operating system. Modern PC operating systems all support multiprogramming, often with dozens of programs started at boot time. Their job is to provide good support to a single user. Such systems are widely used for word processing, spreadsheets, games, and Internet access. Common examples are Linux, FreeBSD, Windows 7, Windows 8, and Apple's OS X.

Handheld computer operating system

As hardware gets smaller and smaller, we see tablets, smartphones, and other handheld computer systems. A handheld computer, or PDA (Personal Digital Assistant), is a small computer that can be held and operated in one hand. This market is now dominated by Google's Android and Apple's iOS.

Embedded operating system

An embedded operating system runs on a computer that controls a device not usually thought of as a computer, and that does not accept user-installed software. Typical examples are microwave ovens, cars, DVD recorders, mobile phones, and MP3 players. Because all the software runs in ROM, there is no need for protection between applications, which allows some simplification. The main embedded operating systems are Linux, QNX, and VxWorks.

Sensor node operating system

Networks of tiny sensor nodes are being deployed for many purposes. These nodes are tiny computers that communicate with each other and with a base station over wireless links. Such sensor networks are used for protecting building perimeters, guarding national borders, detecting forest fires, measuring temperature and precipitation for weather forecasting, and so on.

Each sensor node is a real computer, with a CPU, RAM, ROM, and one or more environmental sensors. It runs a small but genuine operating system, usually an event-driven one that responds to external events.
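The event-driven style described above can be sketched in a few lines. This is a toy illustration, not any real sensor-node OS (such systems are typically written in C): handlers are registered per event type, and a single loop blocks until an event, such as a sensor reading, arrives.

```python
import queue

class EventKernel:
    """A tiny event-driven kernel sketch: no threads, just one dispatch loop."""
    def __init__(self):
        self.handlers = {}
        self.events = queue.Queue()

    def on(self, event_type, handler):
        self.handlers.setdefault(event_type, []).append(handler)

    def post(self, event_type, payload):
        # On a real node this would be called from an interrupt routine.
        self.events.put((event_type, payload))

    def run(self, max_events):
        # Block until an event arrives, then run every handler for its type.
        for _ in range(max_events):
            event_type, payload = self.events.get()
            for handler in self.handlers.get(event_type, []):
                handler(payload)

readings = []
kernel = EventKernel()
kernel.on("temperature", readings.append)
kernel.post("temperature", 21.5)
kernel.post("temperature", 22.0)
kernel.run(max_events=2)
print(readings)  # [21.5, 22.0]
```

Because there is only one loop and no preemption, handlers must be short; that constraint is what lets such kernels fit in a few kilobytes of ROM.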

Real time operating system

Another category of operating systems is the real-time operating system, characterized by having time as a key parameter. For example, in an industrial process-control system, a real-time computer in the factory must collect data about the production process and use it to control the machines. If an action absolutely must occur at a specified moment, it is a hard real-time system. Many such systems are found in industrial control, avionics, the military, and similar applications. The other kind is the soft real-time system, in which missing an occasional deadline, while undesirable, is acceptable and causes no permanent damage. Digital audio and multimedia systems are of this kind, as are smartphones.
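The soft real-time case can be made concrete with a small sketch. Here, hypothetically, each iteration stands in for processing one 50 ms audio buffer: the loop does its work, checks whether it overran its deadline, and, in soft real-time fashion, merely counts the miss and carries on rather than failing.

```python
import time

PERIOD = 0.05  # 50 ms per audio buffer: the deadline for each iteration

def process_buffer():
    time.sleep(0.01)  # stand-in for decoding one chunk of audio

missed = 0
next_deadline = time.monotonic() + PERIOD
for _ in range(10):
    process_buffer()
    now = time.monotonic()
    if now > next_deadline:
        missed += 1          # soft real-time: log the miss and keep going
    else:
        time.sleep(next_deadline - now)  # wait out the rest of the period
    next_deadline += PERIOD

print(f"missed {missed} of 10 deadlines")
```

A hard real-time system could not shrug off a miss this way; there, an overrun deadline is a system failure, which is why such systems bound the worst-case execution time of every task in advance.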

Smart card operating system

The smallest operating systems run on smart cards, credit-card-sized devices containing a CPU chip. They operate under severe constraints on processing power and memory. Some cards handle only a single function, such as electronic payments; others are Java oriented, meaning the ROM on the smart card holds an interpreter for the Java Virtual Machine (JVM).

Operating System Concepts

Most operating systems provide certain basic concepts and abstractions, such as processes, address spaces, and files, that are central to understanding them. Below we will briefly introduce some of these basic concepts. To illustrate them, we will use examples from UNIX from time to time; similar examples exist on other systems, and we will cover some of them later.

Article References:

Modern Operating Systems, Fourth Edition

baike.baidu.com/item/ Operating system /1…

Faculty.cs.niu.edu/~hutchins/c…

www.computerhope.com/jargon/c/cl…

Bilibili – Operating System: www.bilibili.com/video/av955…