I never systematically studied operating systems courses. While preparing for interviews, I found that big companies like to ask questions on this topic, so I collected some good articles and summarized them around the questions friends were asked. Topics like the difference between processes and threads, synchronous vs. asynchronous, blocking vs. non-blocking, and deadlocks are, I think, helpful for understanding some things on the front end. Topics such as disk seeking and interprocess communication are also frequently tested for students aiming at big companies.

What is the difference between a process and a thread?

There are many answers to this question online, and many vivid examples; here I give the answer I find most convincing, for reference only.

In essence, both processes and threads are descriptions of periods of CPU working time, i.e., of the execution of a running program's instructions.

Background:

To explain the above statement, we first need to know some computer basics.

  1. A CPU executes extremely fast. Even when there are multiple tasks to run, the CPU executes them in turns, so quickly that to us it looks like multitasking.

  2. Before executing a program or task, the CPU needs all related resources ready. Tasks wait in the ready queue until the operating system's scheduling algorithm selects one for the CPU to execute. Then the PC (program counter) points to the start of the code, and the CPU fetches and executes instructions one by one. While the CPU executes a task, the resources it needs are loaded into RAM.

  3. The CPU's execution of each task here can be understood as a process, and the prepared resources can be understood as the context needed while the process executes. When a process finishes, or uses up the CPU time allocated to it, the current process context must be saved so the process can wait for its next turn on the CPU.

So how do multiple processes execute in an orderly manner? Load the context of one process, run it, save its context; load the context of the next process, run it, save its context... and repeat. With that, I think you have a feel for the nature of processes: to describe the stretch of time in which the CPU executes a program between context switches, we use the terms process and thread.

An aside, not strictly relevant: nouns refer to objective things, adjectives describe them.

Threads are smaller than processes. Say the CPU is going to execute a program, i.e., one process, and that program is made up of parts A, B and C. The following example should make this clearer.

Imagine that opening Chrome is like starting a process. To display a page, Chrome has a thread that parses the JS code, a thread that sends requests for data, a thread that generates bitmap data, and so on; they take turns in some order to produce the page we want to see.

The execution process is: load the process's context once, then the CPU executes A, then B, then C, and finally saves the context. Switching between thread contexts takes much less time than switching between process contexts, so CPU utilization is greatly improved. Global variables can be used almost anywhere in a program precisely because all threads share the current process's context, i.e., the current process's address space.

Note that the process/thread concepts described here differ from the process/thread constructs in actual code. The definitions in a programming language are just that language's implementation, a concrete realization of the process/thread concepts.


What are single-threading and multithreading?

CPU execution time can be divided, from the macro to the micro level, into processes and threads; and threads can be organized as single-threaded or multithreaded.

  • Single-threaded: easy to understand. Our JavaScript is single-threaded, meaning the browser uses only one thread to run JS code; if it hits an error somewhere, it stops and the code after it cannot execute. This state of being unable to continue after an error is also called blocking.
  • Multithreading: multiple threads execute in turns in some order. The example above illustrates multithreading well, but note that although a multithreaded program looks like several threads running at the same time, to the CPU they are still just executed in turns in some order.

In simple terms, within a process, being able to do only one thing at a time is single-threading; being able to do several things is multithreading.
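To make this concrete, here is a minimal multithreading sketch in C (pthreads; compile with -pthread). It is only an illustration: the two threads appear to run simultaneously, but the CPU still schedules them in turns.

```c
#include <pthread.h>
#include <stdio.h>

/* Minimal multithreading sketch: one process, two threads that appear
   to run "at the same time" but are really executed in turns. */
void *task(void *name) {
    for (int i = 0; i < 3; i++)
        printf("%s: step %d\n", (const char *)name, i);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, task, "thread A");
    pthread_create(&t2, NULL, task, "thread B");
    /* both threads share this process's address space and globals */
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```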

What are concurrency and parallelism?

  • Concurrency: in operating systems, concurrency means that within a period of time several programs are all somewhere between having started and having finished, all running on the same processor, but at any single instant only one of them is actually running on the processor.


Consider multiple users accessing a server. If the server only started handling the next user's request after fully finishing the previous one, too much time would be wasted. So although we appear to access a website at the same time, the web server is actually switching between different clients' requests; it just processes them so fast that we think multiple users are visiting the site simultaneously.


  • Parallelism: multiple instructions running on different processors at the same instant. For example, if we write code while listening to music, the music player process and the VS Code process run in parallel.

What is the difference between a process and a thread?

Having covered all of the above, before answering this question I still hope you remember it through understanding, so you can give the interviewer a satisfying answer.

  1. Both processes and threads are descriptions of periods of CPU working time
  2. Processes are the basic unit of resource allocation; threads are the basic unit of program execution
  3. Processes have independent address spaces; in a multi-process program, one process crashing does not affect the others. Threads in a multithreaded program share the address space of their process, so one thread crashing can bring down the whole process
  4. Threads can communicate with each other easily because they share the same memory; a pointer is enough to reach shared data

In fact, from a macro point of view a multithreaded program lets several functions of a program appear to execute at the same time; but in essence, the CPU still executes the different threads in some order within the current process's context (thread context switches are simply cheaper).

What are synchronous and asynchronous?

Synchronous and asynchronous are essentially about the communication mechanism of messages (synchronous communication / asynchronous communication).

Synchronous: if you make a call, the call does not return until it has a result; once it has a result, the call returns with that value. That is, the caller actively waits for the result of the call.

Asynchronous: the opposite. If we make a call, the call returns immediately, so no result comes back with it. That is, when a call is initiated we do not get the result right away; instead the callee notifies the caller via status, events, or a callback function (much like a Promise). For example, you phone a bookstore owner to ask whether they have Distributed Systems. With synchronous communication, the owner says "wait a minute, I'll check", and keeps checking (it could take 5 seconds, it could take a day) while you stay on the line for the result. With asynchronous communication, the owner says he will look into it and call you when he knows, then hangs up; when he has the answer, he calls you back. Here the owner returns the result via a "callback".
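As a rough sketch of the difference (all function names below are made up for illustration; compile with -pthread), the synchronous version returns the answer directly, while the asynchronous version returns immediately and delivers the result later through a callback, like the bookstore owner calling back:

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

typedef void (*callback_t)(int in_stock);
static callback_t g_cb;

/* Synchronous: the caller waits on the line until the result is ready. */
static int check_book_sync(void) {
    sleep(2);                 /* the owner goes and checks the shelves */
    return 1;                 /* result comes back as the return value */
}

static void *worker(void *arg) {
    (void)arg;
    sleep(2);                 /* checking happens in the background */
    g_cb(1);                  /* the owner "calls back" with the result */
    return NULL;
}

/* Asynchronous: returns immediately; the result arrives via callback. */
static void check_book_async(callback_t cb) {
    pthread_t t;
    g_cb = cb;
    pthread_create(&t, NULL, worker, NULL);
    pthread_detach(t);
}

static void on_result(int in_stock) {
    printf("callback: the book is %savailable\n", in_stock ? "" : "not ");
}

int main(void) {
    printf("sync result: %d\n", check_book_sync()); /* blocked ~2s here */
    check_book_async(on_result);                    /* returns at once */
    puts("async call returned; doing other things...");
    sleep(3);                                       /* wait for the callback */
    return 0;
}
```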

What are blocking and non-blocking?

Blocking and non-blocking describe the state of a program while it waits for the result of a call (the return of a message). A blocking call means the thread is suspended until the current call returns a result; nothing else can be done until then. A non-blocking call means the thread can handle other things while waiting for the result.

An example: you phone the bookstore owner and ask whether they have Distributed Systems. If you block, you "hang there" on the line doing nothing until you get the answer. If you don't block, you go off and do your own thing regardless of whether the owner has answered yet, checking in with him every few minutes. Blocking and non-blocking here have nothing to do with synchronous vs. asynchronous; they don't depend on how the owner answers you.

To summarize: synchronous and asynchronous are about the communication mechanism of messages, while blocking and non-blocking are about the program's state while waiting for a call's result. Synchronous vs. asynchronous is mainly about how the result is delivered: if the call returns the value directly, that is synchronous; if the call returns immediately without a value but we are notified somehow when the value is ready, that is asynchronous. Blocking vs. non-blocking is mainly about the state while waiting for that notification: if we can do nothing else while waiting, we are blocked; if we can do other things while waiting, we are non-blocking.
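Here is a minimal sketch of blocking vs. non-blocking at the system-call level: reading from an empty pipe normally suspends the caller, but with the O_NONBLOCK flag the read returns immediately, so the program can do other things and poll again, like checking in with the boss every now and then.

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Minimal sketch of blocking vs. non-blocking, using a pipe that
   never receives data. */
int main(void) {
    int fds[2];
    pipe(fds);

    /* switch the read end to non-blocking mode */
    int flags = fcntl(fds[0], F_GETFL, 0);
    fcntl(fds[0], F_SETFL, flags | O_NONBLOCK);

    char buf[16];
    for (int i = 0; i < 3; i++) {
        ssize_t n = read(fds[0], buf, sizeof buf);  /* returns at once */
        if (n < 0)
            puts("no data yet, doing something else...");
        sleep(1);   /* "check in with the boss every now and then" */
    }
    /* without O_NONBLOCK, the read() above would have hung forever */
    return 0;
}
```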

What are the communication methods between processes? What are their advantages and disadvantages?


As mentioned above, each process has its own user address space; one process cannot access another process's global variables or data. So if processes want to communicate, they must go through a "third party": a kernel memory buffer. Process A copies data into the kernel buffer, and process B reads it from there to use it. This completes the communication between the processes. The mechanism provided by the kernel is called interprocess communication (IPC, InterProcess Communication).

Pipes:

Pipes come in two kinds: anonymous pipes and named pipes. In Linux, the "|" in shell commands represents an anonymous pipe; interested readers can look into its usage. The essence of a pipe is a kernel buffer from which processes read and write data in first-in-first-out order. Processes at one end of the pipe write data sequentially into the buffer, and processes at the other end read it out sequentially; we can think of it as a queue. Anonymous pipes (a minimal sketch follows this list):

  • It is half-duplex: data flows in only one direction, so if both sides need to communicate, two pipes must be created
  • It can only be used between related processes: parent-child processes and sibling processes
  • Data reading and writing: what one process writes into the pipe is read by the process at the other end. Written content is appended to the tail of the pipe buffer, and data is read from the buffer's head.
  • Note that during communication the data stays in memory
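A minimal sketch of an anonymous pipe between related processes (parent and child created by fork):

```c
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

/* Minimal anonymous-pipe sketch: the child writes into the pipe, the
   parent reads from it. This only works because fork() lets the child
   inherit the parent's file descriptors (related processes). */
int main(void) {
    int fds[2];                    /* fds[0] = read end, fds[1] = write end */
    pipe(fds);

    if (fork() == 0) {             /* child process */
        close(fds[0]);             /* child only writes */
        const char *msg = "hello from the child";
        write(fds[1], msg, strlen(msg) + 1);
        return 0;
    }
    close(fds[1]);                 /* parent only reads */
    char buf[64];
    read(fds[0], buf, sizeof buf); /* blocks until the child has written */
    printf("parent read: %s\n", buf);
    wait(NULL);
    return 0;
}
```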

Named pipes (a small sketch follows this list):

  • They remove the limitation that anonymous pipes can only be used between related processes
  • The pipe's name is stored on the system disk, while its contents live in the kernel
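A minimal named-pipe sketch; the path /tmp/demo_fifo is made up for the demo. After starting the program, an unrelated process can read the data, e.g. with `cat /tmp/demo_fifo` in another terminal. Note that opening the FIFO for writing blocks until a reader shows up.

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

/* Minimal named-pipe (FIFO) sketch: create a FIFO and write one line. */
int main(void) {
    const char *path = "/tmp/demo_fifo";
    mkfifo(path, 0600);                        /* the name lives on disk */

    int fd = open(path, O_WRONLY);             /* blocks until someone reads */
    dprintf(fd, "hello over a named pipe\n");  /* data stays in the kernel */
    close(fd);
    unlink(path);                              /* remove the filesystem name */
    return 0;
}
```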

Advantages and disadvantages of pipes: the advantages are clear; they are simple to use on Linux, and the blocking mechanism guarantees that written data is actually read by the other side.

The disadvantage is that the kernel buffer is limited in size, and because of the blocking mechanism, process A cannot return from passing data until process B has read it. So pipes are not suitable for frequent communication.

Signals:

A signal is a software-level simulation of the interrupt mechanism and an asynchronous communication method. Signals allow direct interaction between user-space processes and the kernel; the kernel can use signals to notify a user-space process of system events.

  • Signals are a common means of interprocess communication on Linux. A signal can be sent to a process at any time, without needing to know the process's state
  • If the target process is not currently executing, the signal is kept by the kernel and delivered when the process runs
  • If the process has blocked the signal, delivery is delayed until the block is lifted

In other words, signals are mainly passed directly between processes and the kernel and can be used to control process termination and exit. Signal events come from two main sources:

  • Hardware sources: keyboard input such as Ctrl+C to interrupt a program, hardware exceptions such as invalid memory accesses, etc.
  • Software sources: signals that terminate a process, other processes calling the kill function, signals generated by software exceptions, etc.

Common signals in Linux include:

(1) SIGHUP: sent to all processes started from a terminal when the user logs out of that terminal; the default action is to terminate the process.
(2) SIGINT: program interrupt signal, generated by pressing Ctrl+C while the program runs.
(3) SIGQUIT: program quit signal, generated by pressing Ctrl+\ while the program runs.
(4) SIGBUS and SIGSEGV: the process accessed an invalid address.
(5) SIGFPE: a fatal arithmetic error occurred, such as division by zero or data overflow.
(6) SIGKILL: forcibly terminates a process; sent by running kill -9 in a shell.
(7) SIGTERM: asks a process to end; sent by running kill PID in a shell.
(8) SIGALRM: timer signal.
(9) SIGCLD: a child process exited. If the parent neither ignores nor handles this signal, the exited child becomes a zombie process.
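For example, a process can register a handler so that the kernel delivers Ctrl+C to it as SIGINT; a minimal sketch:

```c
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

/* Minimal sketch: install a handler for SIGINT (Ctrl+C). */
static void on_sigint(int signo) {
    /* note: printf is not strictly async-signal-safe; fine for a demo */
    printf("caught SIGINT (%d), exiting\n", signo);
    _exit(0);
}

int main(void) {
    signal(SIGINT, on_sigint);   /* register the handler with the kernel */
    for (;;) {
        puts("working... press Ctrl+C");
        sleep(1);                /* the signal interrupts us asynchronously */
    }
}
```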


Message queue:

We know pipes have a blocking mechanism: the process passing data cannot return until the other end has received it. Can we instead have the process drop the data somewhere in memory and return immediately? Yes: that is the message queue. Process A puts data on the message queue and returns; process B fetches it from the queue later when it needs it, a bit like a cache. This solves the pipe's problems of one-way transmission and blocking. However, if the data being communicated is large, repeatedly copying it in and out still costs the system a lot. Note: anonymous pipes exist as in-memory files, named pipes have names on the system disk, and message queues live in the kernel.
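A minimal System V message queue sketch (sender and receiver are collapsed into one process here just to keep the demo self-contained; in real use two processes would share the queue via a common key):

```c
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/msg.h>

/* Minimal message queue sketch: send one message, read it back. */
struct message { long mtype; char mtext[64]; };

int main(void) {
    /* IPC_PRIVATE keeps the demo self-contained in one process */
    int qid = msgget(IPC_PRIVATE, IPC_CREAT | 0600);
    if (qid < 0) { perror("msgget"); return 1; }

    struct message out = { .mtype = 1 };
    strcpy(out.mtext, "hello via message queue");
    msgsnd(qid, &out, sizeof out.mtext, 0);   /* enqueue and return at once */

    struct message in;
    msgrcv(qid, &in, sizeof in.mtext, 1, 0);  /* dequeue message of type 1 */
    printf("received: %s\n", in.mtext);

    msgctl(qid, IPC_RMID, NULL);              /* remove the queue */
    return 0;
}
```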

Shared memory:

Since message queues are still inefficient when large amounts of data must be copied, wouldn't the problem be solved if two processes could share the same memory? In fact, the system does not hand physical memory to a process directly but gives it an address space; by letting multiple processes map part of their address spaces onto the same memory, we get the fastest possible interprocess communication, while each process otherwise keeps its own independent address space and only a part is shared.
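A minimal sketch of shared memory, using an anonymous shared mapping inherited across fork (the classic System V shmget/shmat interface works similarly):

```c
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

/* Minimal shared-memory sketch: both processes see the same page. */
int main(void) {
    int *shared = mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
                       MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (shared == MAP_FAILED) { perror("mmap"); return 1; }
    *shared = 0;

    if (fork() == 0) {          /* child writes into the shared page */
        *shared = 42;
        return 0;
    }
    wait(NULL);                 /* parent waits, then reads the value */
    printf("parent sees %d\n", *shared);   /* prints 42 */
    return 0;
}
```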

Semaphore:

We know JS was designed to be single-threaded essentially because, if multiple threads modified the DOM at the same time, what state would the page's DOM end up in? The purpose of a semaphore is essentially to solve this problem for shared memory: ensure that while one process is using the memory, the others are not, preventing simultaneous modification. A semaphore is essentially a counter. If the initial value is 1, then when process A accesses the memory it decrements the semaphore to 0; other processes that see 0 wait instead of accessing the memory, and when A finishes it sets the value back to 1.
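A minimal sketch of a process-shared POSIX semaphore (compile with -pthread) protecting a counter in shared memory; without the sem_wait/sem_post pair, the two processes could corrupt the counter by modifying it at the same time:

```c
#include <semaphore.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

/* Both the semaphore and the counter live in a shared mapping. */
int main(void) {
    sem_t *sem = mmap(NULL, sizeof(sem_t) + sizeof(int),
                      PROT_READ | PROT_WRITE,
                      MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    int *counter = (int *)(sem + 1);
    sem_init(sem, 1, 1);        /* pshared=1, initial value 1 */
    *counter = 0;

    if (fork() == 0) {
        for (int i = 0; i < 100000; i++) {
            sem_wait(sem);      /* value 1 -> 0: enter critical section */
            (*counter)++;
            sem_post(sem);      /* value 0 -> 1: leave critical section */
        }
        return 0;
    }
    for (int i = 0; i < 100000; i++) {
        sem_wait(sem);
        (*counter)++;
        sem_post(sem);
    }
    wait(NULL);
    printf("counter = %d\n", *counter);   /* 200000 with the semaphore */
    return 0;
}
```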

Sockets:


All the communication mechanisms above are for processes on the same computer; sockets provide a way for processes on different computers to communicate (they can also be used locally).

A socket sits between the application layer and the transport layer, acting as the bridge between them.
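A minimal local sketch using socketpair, which hands back two already-connected sockets; a real network program would instead use socket/bind/listen/accept on the server and socket/connect on the client, but reading and writing work the same way:

```c
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

/* Minimal socket sketch: parent and child talk over a socket pair with
   the same read/write calls they would use over a network. */
int main(void) {
    int fds[2];
    socketpair(AF_UNIX, SOCK_STREAM, 0, fds);

    if (fork() == 0) {                       /* child: use fds[1] */
        close(fds[0]);
        const char *msg = "hello over a socket";
        write(fds[1], msg, strlen(msg) + 1);
        return 0;
    }
    close(fds[1]);                           /* parent: use fds[0] */
    char buf[64];
    read(fds[0], buf, sizeof buf);
    printf("parent received: %s\n", buf);
    wait(NULL);
    return 0;
}
```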

That's about it for interprocess communication. Although this is only a brief survey, I consulted many articles and tried to summarize things simply and accessibly; I don't want to memorize lots of complicated detail, because this part is rarely used in daily work.



Please tell me about the disk scheduling (seek) algorithm.

Let’s take a look at the disk structure:

A common mechanical hard disk looks like this: the circular parts in the middle are the platters, and there are usually several of them, each with its own read/write head. Each platter surface is divided into multiple tracks, and each track into multiple sectors of 512 bytes each. The tracks with the same number on the different platters together form a cylinder.



The purpose of a disk scheduling algorithm is simple: improve disk access performance, generally by optimizing the order of disk access requests.



Seek time (moving the head to the target track) is the most time-consuming part of a disk access. If the request sequence is optimized properly, unnecessary seek time can be saved and disk access performance improved.



Next, several seek algorithms will be introduced:

  • First come, first served algorithm
  • Shortest seek time first algorithm
  • Scanning algorithm
  • Cyclic scanning algorithm
  • LOOK and C-LOOK algorithms

First come first served algorithm FCFS:

Requests are scheduled simply in the order in which processes issue them.



Advantages: simple and fair; each process's requests are handled in turn, and no process's request goes unserviced for a long time.



Disadvantages: because the algorithm does not optimize seeking at all, the average seek time is long.

Shortest seek time first algorithm SSTF:



The request on the track closest to the current head position is processed first.



Advantages: performs better than first come, first served; each individual seek is as short as possible.



Disadvantages: under a stream of dynamic requests, starvation can occur: the head keeps moving within a small area, and some processes' requests may never be serviced.



SCAN algorithm (elevator algorithm):



Because shortest seek time first can cause starvation (the head stays within a small range and some tracks may never be visited), we instead let the head keep moving in one direction until it reaches the last track in that direction, and only then reverse. Since this seek pattern works like an elevator, SCAN is also known as the elevator algorithm.



Advantages: good seek performance, and no starvation problem.



Disadvantages: the middle tracks get the better deal; they are serviced more frequently than the edge tracks, i.e., the response frequency is not uniform across tracks.



Cyclic scanning algorithm CSCAN:

Because the SCAN algorithm services tracks with non-uniform frequency, the circular scan algorithm makes the head service requests in one direction only: it moves in that direction until it has handled the last request there, then quickly returns to the opposite edge of the disk without servicing any requests on the way back, and sweeps in the same direction again.

The circular scan algorithm fixes SCAN's non-uniform response across tracks, making it a fairly well-rounded algorithm.
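To make the differences concrete, here is a small C sketch comparing the total head movement of FCFS, SSTF, and SCAN on a made-up request list (the track range, the starting head position, and SCAN's upward initial sweep are all assumptions of the demo):

```c
#include <stdio.h>
#include <stdlib.h>

#define N 8
#define MAX_TRACK 199                     /* assume tracks numbered 0..199 */

/* FCFS: service requests strictly in arrival order. */
static int fcfs(const int *req, int head) {
    int moved = 0;
    for (int i = 0; i < N; i++) {
        moved += abs(req[i] - head);
        head = req[i];
    }
    return moved;
}

/* SSTF: always service the pending request nearest the current head. */
static int sstf(const int *req, int head) {
    int done[N] = {0}, moved = 0;
    for (int k = 0; k < N; k++) {
        int best = -1;
        for (int i = 0; i < N; i++)
            if (!done[i] &&
                (best < 0 || abs(req[i] - head) < abs(req[best] - head)))
                best = i;
        moved += abs(req[best] - head);
        head = req[best];
        done[best] = 1;
    }
    return moved;
}

/* SCAN (elevator): sweep up to the last track, then reverse if any
   requests remain below the starting position. */
static int scan(const int *req, int head) {
    int lowest = req[0];
    for (int i = 1; i < N; i++)
        if (req[i] < lowest) lowest = req[i];
    int moved = MAX_TRACK - head;         /* up to the edge of the disk */
    if (lowest < head)
        moved += MAX_TRACK - lowest;      /* back down to the lowest request */
    return moved;
}

int main(void) {
    int req[N] = {98, 183, 37, 122, 14, 124, 65, 67};  /* made-up requests */
    int head = 53;                                     /* starting track */
    printf("FCFS total head movement: %d\n", fcfs(req, head));
    printf("SSTF total head movement: %d\n", sstf(req, head));
    printf("SCAN total head movement: %d\n", scan(req, head));
    return 0;
}
```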

LOOK and C-LOOK:

LOOK and C-LOOK address the fact that SCAN and circular scan always travel to the last track in a certain direction before turning around. After this optimization, the head changes direction (or jumps back) as soon as the last request in the current direction has been serviced, without needing to reach the last track.

Talk about your understanding of deadlocks

Deadlock definition:

A set of processes is deadlocked if each process in the set is waiting for an event (a resource) that only another process in the set can produce. Put simply: process A needs a resource during its execution, but that resource is held by process B, so A cannot continue; meanwhile A holds another resource that it cannot release while stuck. After a while B needs the resource A is holding, and now neither process can continue to run. This is a deadlock. So the essence of a deadlock is processes being unable to continue running because of resource contention (which can also be understood as the order in which resources are acquired). Note: the resources mentioned here can be many things, such as locks, network connections, notification events, disks, etc.
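A minimal sketch of a deadlock with two threads and two locks (pthreads; compile with -pthread). Each thread holds one lock and waits for the other, forming the circular wait described below, so the program hangs forever:

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* Minimal deadlock sketch: two threads take two locks in opposite order.
   Thread 1 holds lock_a and waits for lock_b; thread 2 holds lock_b and
   waits for lock_a. */
pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

void *worker1(void *arg) {
    pthread_mutex_lock(&lock_a);
    sleep(1);                       /* give worker2 time to grab lock_b */
    pthread_mutex_lock(&lock_b);    /* blocks forever */
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

void *worker2(void *arg) {
    pthread_mutex_lock(&lock_b);
    sleep(1);                       /* give worker1 time to grab lock_a */
    pthread_mutex_lock(&lock_a);    /* blocks forever */
    pthread_mutex_unlock(&lock_a);
    pthread_mutex_unlock(&lock_b);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker1, NULL);
    pthread_create(&t2, NULL, worker2, NULL);
    pthread_join(t1, NULL);         /* never returns: the program hangs */
    pthread_join(t2, NULL);
    return 0;
}
```

Making both threads acquire lock_a before lock_b, i.e. a fixed global lock order, removes the circular wait and thus the deadlock.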

The necessary conditions for a deadlock to occur:

  • Mutual exclusion condition: a resource can be held by only one process at a time; while it is held, other processes cannot use it
  • Request-and-hold condition: when a process requests a resource that is held by another process, the requesting process keeps waiting without releasing the resources it already holds
  • No-preemption condition: a resource in use cannot be taken away from a process; other processes can use it only after it is released
  • Circular wait condition: in the set of deadlocked processes, each process holds a resource needed by another, forming a waiting cycle


How to prevent and avoid deadlocks:

1. Plan the order of resource use sensibly. Since deadlocks mostly come from unreasonable resource-acquisition orders, we should define a more sensible order of resource use to avoid deadlocks as much as possible. For example, if process A uses resource b and then resource c, while process B uses resource c first and then resource b, a deadlock is likely; if we make process B release resource c before taking resource b (or make both processes acquire the resources in the same order), the deadlock can be avoided. This is just one example of avoiding deadlock through sensible resource allocation.

2. Set a maximum resource-holding time. When a process requests a resource occupied by another process, it holds its own resources and waits, which may leave other processes unable to run and lead to deadlock. Since deadlock can be caused by a resource being held too long, we set a maximum holding time and release the resource when the time is exceeded.

3. Allocate all needed resources in advance. If a process acquires resources on the fly while running, and a resource happens to be held by another process, a deadlock may result; so before the process runs, we can assign it all the resources it will need. This also avoids deadlock as much as possible, but it lowers resource utilization, since some resources are not needed for the whole run.

4. Banker's algorithm. Bankers are good at calculating 🤣, so the banker's algorithm computes, for all processes, the resources currently held, the resources still needed, and the remaining available resources, and analyzes whether the current state is safe, granting requests only when resources are guaranteed sufficient so that no deadlock occurs. A minimal sketch of its safety check follows.
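The sketch below uses made-up allocation numbers purely for illustration; it tries to find an order in which every process can finish using only the currently available resources plus what earlier finishers return:

```c
#include <stdio.h>

#define P 3   /* processes */
#define R 2   /* resource types */

/* Minimal banker's-algorithm safety check. need[i][j] is process i's
   remaining possible demand for resource j (max demand minus current
   allocation). The state is safe if some finishing order exists. */
int main(void) {
    int avail[R]    = {3, 2};                       /* made-up demo data */
    int alloc[P][R] = {{1, 0}, {2, 1}, {0, 2}};
    int need[P][R]  = {{2, 2}, {1, 1}, {3, 0}};
    int finished[P] = {0};

    for (int round = 0; round < P; round++) {
        int progressed = 0;
        for (int i = 0; i < P; i++) {
            if (finished[i]) continue;
            int can_run = 1;
            for (int j = 0; j < R; j++)
                if (need[i][j] > avail[j]) can_run = 0;
            if (can_run) {          /* let i finish; reclaim its resources */
                for (int j = 0; j < R; j++) avail[j] += alloc[i][j];
                finished[i] = 1;
                progressed = 1;
                printf("process %d can finish\n", i);
            }
        }
        if (!progressed) break;     /* no process can proceed: unsafe */
    }
    for (int i = 0; i < P; i++)
        if (!finished[i]) { puts("state is UNSAFE (deadlock risk)"); return 1; }
    puts("state is SAFE");
    return 0;
}
```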

Detecting deadlock:

We can run algorithms periodically to detect whether a deadlock has occurred in the system, for example by examining the current allocation and use of resources.

Breaking a deadlock:

When a deadlock is detected we need to resolve it. The most direct way is to restart the computer, but as programmers we should be more professional.

  • Terminate the related processes: cancel or suspend processes involved in the deadlock, forcing them to release resources
  • Preempt resources: forcibly take resources away from some deadlocked processes to break the deadlock
  • Process rollback: roll a deadlocked process back to a point before the problem occurred, though this is harder to do

Reference:

Zhihu: Xiaolin Coding, big-company interview question "Scheduling Algorithms", explained with 20 diagrams