Contents

  • Process versus thread
  • Multithreading
    • Synchronization and mutual exclusion
    • Producers and consumers
    • The dining philosophers problem
    • The reader-writer problem
  • Interprocess communication
    • Pipes
    • Message queues
    • Shared memory
    • Semaphores
    • Signals
    • Sockets
  • Locks
    • Mutexes and spinlocks
    • Read-write locks
    • Optimistic and pessimistic locks
    • Deadlock

Process versus thread

  • A process is the unit of resource allocation (memory, open files, and so on), while a thread is the unit of CPU scheduling;
  • A process owns a complete set of resources, while a thread exclusively owns only the essentials it needs to run, such as its registers and stack;
  • Threads, like processes, have three basic states: ready, running, and blocked, with the same transitions between states;
  • Threads reduce the time and space overhead of concurrent execution.

Compared with processes, threads reduce overhead in several ways:

  • A thread is faster to create than a process. Creating a process also requires setting up resource-management information, such as memory-management and file-management structures, whereas a new thread does not create such information but shares the process's existing resources.
  • A thread terminates faster than a process, because it has fewer resources to release.
  • Switching between threads of the same process is faster than switching between processes. Threads in the same process share the same address space (the same virtual memory), which means they share one page table, so no page-table switch is needed when switching between them. Switching between processes requires switching page tables, and a page-table switch is relatively expensive.
  • Since threads of the same process share memory and file resources, data passed between threads does not have to go through the kernel, making data exchange between threads more efficient.

Multithreading

Synchronization and mutual exclusion

If multiple threads compete for shared resources and no effective measures are taken, the shared data will end up corrupted.

Because such code can cause a race condition when multiple threads operate on shared variables, we call it a critical section: a code fragment that accesses shared resources and must not be executed by multiple threads at the same time.

We want this code to be mutually exclusive: while one thread is executing in the critical section, other threads must be prevented from entering it.

Synchronization means that concurrent processes/threads may need to wait for each other and exchange messages at certain key points. This mutually constrained waiting and message exchange is called process/thread synchronization.

  • Locks: lock and unlock operations;
  • Semaphores: P and V operations;

Both conveniently implement process/thread mutual exclusion, and semaphores are a little more powerful than locks: they also conveniently implement process/thread synchronization.

Locks can be divided into busy-wait locks and no-wait locks:

If the lock cannot be acquired, the thread keeps retrying in a while loop and does nothing else, so this is called a "busy-wait lock", better known as a spin lock.

A no-wait lock, by contrast, does not spin when the lock is unavailable.

Since we don't want to spin, when the lock cannot be acquired, the current thread is placed in the lock's wait queue and the scheduler is invoked, yielding the CPU to other threads.
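
As a concrete illustration of the busy-wait kind, here is a minimal spin lock built on a C11 atomic test-and-set. This is a sketch, not a library API: spin_lock, spin_unlock, and the counter demo are illustrative names of our own.

```c
#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;
static long counter = 0;

static void spin_lock(void) {
    /* Busy-wait: retry the atomic test-and-set until it reports the
       flag was clear, i.e. until we have acquired the lock. */
    while (atomic_flag_test_and_set_explicit(&lock, memory_order_acquire))
        ;  /* do nothing -- this while loop is the "busy wait" */
}

static void spin_unlock(void) {
    atomic_flag_clear_explicit(&lock, memory_order_release);
}

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        spin_lock();        /* enter the critical section */
        counter++;          /* shared variable, now race-free */
        spin_unlock();      /* leave the critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);  /* always 200000 */
    return 0;
}
```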

A semaphore represents a number of resources; the corresponding variable is an integer variable (sem).

  • The P operation decrements sem by 1. If sem < 0 afterwards, the process/thread blocks and waits; otherwise it continues. So the P operation may block;
  • The V operation increments sem by 1. If sem <= 0 afterwards, it wakes up one waiting process/thread. The V operation never blocks;



For two concurrent threads, the mutex semaphore takes the values 1, 0, or -1:

If the mutex is 1, no thread is in the critical section. If the mutex is 0, one thread is in the critical section. If the mutex is -1, one thread is in the critical section and another is waiting to enter. By means of the mutex, only one thread can ever be executing in the critical section at a time, which achieves mutual exclusion.
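
A minimal sketch of this mutual exclusion using a POSIX semaphore initialized to 1, assuming a Linux-like system (sem_wait and sem_post are the POSIX names for the P and V operations):

```c
#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

static sem_t mutex;          /* semaphore used as a mutex */
static int shared = 0;

static void *worker(void *arg) {
    (void)arg;
    sem_wait(&mutex);        /* P operation: may block */
    shared++;                /* critical section */
    sem_post(&mutex);        /* V operation: never blocks */
    return NULL;
}

int main(void) {
    sem_init(&mutex, 0, 1);  /* second arg 0: shared between threads */
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("shared = %d\n", shared);
    sem_destroy(&mutex);
    return 0;
}
```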

Producers and consumers

  • After the producer generates data, it places the data in a buffer;
  • The consumer fetches data from the buffer for processing;
  • Only one producer or consumer may access the buffer at any given time;
  • Since only one thread can operate on the buffer at a time, operating on the buffer is critical code and requires mutual exclusion;
  • When the buffer is empty, the consumer must wait for the producer to generate data; when the buffer is full, the producer must wait for the consumer to fetch data. That is, producers and consumers need to synchronize.

We need three semaphores:

Mutex semaphore mutex: used for mutually exclusive access to the buffer, initialized to 1;

Resource semaphore fullBuffers: used by the consumer to ask whether there is data in the buffer; when there is, the data is read. Initialized to 0, meaning the buffer starts empty;

Resource semaphore emptyBuffers: used by the producer to ask whether the buffer has free slots in which to generate data. Initialized to n (the buffer size).
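
Putting the three semaphores together, here is a sketch of the producer-consumer pattern with POSIX semaphores; the ring buffer, its size N, and the item counts are our own illustrative choices:

```c
#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

#define N 5                      /* buffer size */

static int buffer[N];
static int in = 0, out = 0;      /* ring-buffer indices */
static sem_t mutex;              /* mutual exclusion on the buffer, init 1 */
static sem_t fullBuffers;        /* items available to consume, init 0  */
static sem_t emptyBuffers;       /* free slots available, init N        */

static void *producer(void *arg) {
    (void)arg;
    for (int item = 0; item < 20; item++) {
        sem_wait(&emptyBuffers); /* wait for a free slot */
        sem_wait(&mutex);        /* enter the critical section */
        buffer[in] = item;
        in = (in + 1) % N;
        sem_post(&mutex);
        sem_post(&fullBuffers);  /* one more item available */
    }
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < 20; i++) {
        sem_wait(&fullBuffers);  /* wait for data */
        sem_wait(&mutex);
        int item = buffer[out];
        out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&emptyBuffers); /* one more free slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    sem_init(&mutex, 0, 1);
    sem_init(&fullBuffers, 0, 0);
    sem_init(&emptyBuffers, 0, N);
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
```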

The dining philosophers problem

  • 5 philosophers, with nothing else to do, eat noodles around a round table;
  • As it happens, the table has only 5 forks, one placed between each pair of philosophers;
  • The philosophers sit and think, and when one gets hungry, he wants to eat;
  • Awkwardly, these philosophers need two forks to eat noodles, meaning each must hold both the left and the right fork;
  • When finished eating, they put the two forks back and continue thinking.

So the question is: how do we make the philosophers act in an orderly fashion, so that no one ends up forever unable to get hold of a fork?

One solution: the even-numbered philosophers take the left fork first and then the right fork, while the odd-numbered philosophers take the right fork first and then the left fork.

With this scheme, the order in which the P operations pick up the left and right forks depends on the philosopher's number, as the sketch below shows. The V operations need no branch, because a V operation never blocks.
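
A sketch of this even/odd strategy with one POSIX semaphore per fork; the thread scaffolding and the number of eating rounds are our own choices:

```c
#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

#define N 5

static sem_t fork_sem[N];        /* one semaphore per fork, each init 1 */

static void *philosopher(void *arg) {
    long i = (long)arg;          /* philosopher's number, 0..4 */
    long left = i, right = (i + 1) % N;
    for (int round = 0; round < 3; round++) {
        /* think ... */
        if (i % 2 == 0) {        /* even: left fork first, then right */
            sem_wait(&fork_sem[left]);
            sem_wait(&fork_sem[right]);
        } else {                 /* odd: right fork first, then left */
            sem_wait(&fork_sem[right]);
            sem_wait(&fork_sem[left]);
        }
        printf("philosopher %ld eats\n", i);
        /* the V operations need no branch: they never block */
        sem_post(&fork_sem[left]);
        sem_post(&fork_sem[right]);
    }
    return NULL;
}

int main(void) {
    pthread_t t[N];
    for (int i = 0; i < N; i++)
        sem_init(&fork_sem[i], 0, 1);
    for (long i = 0; i < N; i++)
        pthread_create(&t[i], NULL, philosopher, (void *)i);
    for (int i = 0; i < N; i++)
        pthread_join(t[i], NULL);
    return 0;
}
```

Because neighbors grab their shared fork in opposite order, no cycle of "everyone holds one fork and waits for the other" can form, so the deadlock is avoided.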

The reader-writer problem

Readers only read the data and do not modify it, while writers may both read and modify the data.

Rather than reader-priority or writer-priority locks, let's look at a fair read-write strategy.

Fair strategy:

At the beginning, some readers are reading the data, and they have all entered the reader queue. Now a writer arrives and performs P(flag), so that subsequent readers block on flag and cannot enter the reader queue. The reader queue then gradually empties, that is, rCount drops to 0.

This writer cannot start writing immediately (the reader queue is not yet empty) and blocks on the semaphore wDataMutex. When all the readers in the reader queue finish reading, the last reader executes V(wDataMutex) to wake up the waiting writer, and the writer proceeds to write.
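
Here is a sketch of this fair strategy, keeping the semaphore names from the text (flag, wDataMutex, rCount) and adding a semaphore rCountMutex of our own to protect the reader count:

```c
#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

static sem_t flag;         /* fairness turnstile, init 1 */
static sem_t wDataMutex;   /* writer's exclusive access to the data, init 1 */
static sem_t rCountMutex;  /* protects rCount, init 1 */
static int rCount = 0;     /* readers currently in the reader queue */
static int data = 0;

static void *reader(void *arg) {
    (void)arg;
    sem_wait(&flag);            /* blocks here if a writer queued first */
    sem_wait(&rCountMutex);
    if (rCount == 0)
        sem_wait(&wDataMutex);  /* first reader locks out writers */
    rCount++;
    sem_post(&rCountMutex);
    sem_post(&flag);            /* pass the turnstile on quickly */

    printf("read %d\n", data);  /* many readers may be here at once */

    sem_wait(&rCountMutex);
    rCount--;
    if (rCount == 0)
        sem_post(&wDataMutex);  /* last reader: V(wDataMutex) wakes writer */
    sem_post(&rCountMutex);
    return NULL;
}

static void *writer(void *arg) {
    (void)arg;
    sem_wait(&flag);            /* P(flag): later readers now block on flag */
    sem_wait(&wDataMutex);      /* wait for the reader queue to drain */
    data++;                     /* write exclusively */
    sem_post(&wDataMutex);
    sem_post(&flag);
    return NULL;
}

int main(void) {
    sem_init(&flag, 0, 1);
    sem_init(&wDataMutex, 0, 1);
    sem_init(&rCountMutex, 0, 1);
    pthread_t r[3], w;
    for (int i = 0; i < 3; i++)
        pthread_create(&r[i], NULL, reader, NULL);
    pthread_create(&w, NULL, writer, NULL);
    for (int i = 0; i < 3; i++)
        pthread_join(r[i], NULL);
    pthread_join(w, NULL);
    return 0;
}
```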

Interprocess communication

The user address space of each process is independent and generally cannot be accessed by other processes; kernel space, however, is shared by all processes, so communication between processes must go through the kernel.

Pipes

A pipe transfers data in only one direction, so if two processes want to communicate with each other, we need to create two pipes.

Anonymous pipes have no name and are destroyed as soon as communication is finished.

Named pipes are also known as FIFOs, because data is transferred first in, first out.

A command that writes into a named pipe can exit normally only after the data in the pipe has been read.

Pipes are an inefficient way to communicate and are not suitable for frequent data exchange between processes. Their benefit, of course, is simplicity, and it is easy to know that the data in the pipe has been read by the other process.

An anonymous pipe can only be used between processes with a parent-child relationship. Since the pipe has no entity (i.e., no pipe file), the pipe's file descriptors can only be copied to the child via fork for the two processes to communicate.

A named pipe, on the other hand, can be used between unrelated processes, because a named pipe creates a pipe-type device file in advance; any processes can communicate through this device file.
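
A minimal sketch of one-way communication over an anonymous pipe between a parent and the child it forks (for two-way communication, you would create a second pipe):

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];                      /* fd[0] = read end, fd[1] = write end */
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();             /* the child inherits both descriptors */
    if (pid == 0) {                 /* child: reader */
        close(fd[1]);               /* close the unused write end */
        char buf[64];
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        if (n > 0) { buf[n] = '\0'; printf("child read: %s\n", buf); }
        close(fd[0]);
    } else {                        /* parent: writer */
        close(fd[0]);               /* close the unused read end */
        const char *msg = "hello through the pipe";
        write(fd[1], msg, strlen(msg));
        close(fd[1]);               /* signals EOF to the reader */
        wait(NULL);
    }
    return 0;
}
```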

Message queues

For example, if process A wants to send a message to process B, A puts the data into the corresponding message queue and can then return immediately; process B reads the data whenever it needs it.

A message queue is a linked list of messages stored in the kernel. When sending data, the data is divided into independent units: the message bodies (blocks). A message body is a user-defined data type, and the sender and receiver must agree on it, so every message body is a storage block of fixed size, unlike a pipe, which carries an unformatted byte stream. When a process reads a message body from the queue, the kernel deletes that message body. Message queues are not suitable for transferring large data, because the kernel imposes a maximum length on each message body as well as an upper limit on the total length of all message bodies across all queues.

Message-queue communication also incurs the overhead of copying data between user mode and kernel mode: when a process writes data into a kernel message queue, the data is copied from user space to kernel space; likewise, when another process reads a message from the kernel, the data is copied from kernel space back to user space.
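
A minimal single-process sketch of the System V message-queue calls msgget/msgsnd/msgrcv (a real sender and receiver would be two processes agreeing on a key, e.g. via ftok):

```c
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/msg.h>

struct msgbuf {                      /* user-defined message body */
    long mtype;                      /* message type, must be > 0 */
    char mtext[64];                  /* payload both sides agree on */
};

int main(void) {
    /* IPC_PRIVATE keeps the demo self-contained */
    int qid = msgget(IPC_PRIVATE, IPC_CREAT | 0600);
    if (qid == -1) { perror("msgget"); return 1; }

    struct msgbuf out = { .mtype = 1 };
    strcpy(out.mtext, "hello via message queue");
    msgsnd(qid, &out, sizeof(out.mtext), 0);   /* copied into the kernel */

    struct msgbuf in;
    msgrcv(qid, &in, sizeof(in.mtext), 1, 0);  /* copied out; kernel deletes it */
    printf("received: %s\n", in.mtext);

    msgctl(qid, IPC_RMID, NULL);               /* remove the queue */
    return 0;
}
```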

Shared memory

The shared-memory mechanism takes a piece of each process's virtual address space and maps it to the same physical memory.

In this way, what one process writes can be immediately seen by another process without having to copy it back and forth, greatly increasing the speed of inter-process communication.
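
A minimal sketch using the System V shmget/shmat calls, with a forked child writing and the parent reading the same physical memory; the wait() here is a crude stand-in for real synchronization, such as the semaphores described next:

```c
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    /* Create a 4 KB segment; parent and child will map the same pages. */
    int shmid = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
    if (shmid == -1) { perror("shmget"); return 1; }

    if (fork() == 0) {                   /* child: writer */
        char *p = shmat(shmid, NULL, 0); /* map into the child's address space */
        strcpy(p, "written by child, no copying needed");
        shmdt(p);
        return 0;
    }
    wait(NULL);                          /* crude sync: wait for the child */
    char *p = shmat(shmid, NULL, 0);     /* map into the parent's address space */
    printf("parent sees: %s\n", p);      /* the write is immediately visible */
    shmdt(p);
    shmctl(shmid, IPC_RMID, NULL);       /* destroy the segment */
    return 0;
}
```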

Semaphores

To prevent data confusion caused by multiple processes competing for shared resources, a protection mechanism is needed so that shared resources can only be accessed by one process at any time. As it happens, the semaphore implements this protection mechanism.

A semaphore is actually an integer counter that is used for mutual exclusion and synchronization between processes rather than for caching data communicated between processes.

There are two atomic operations that control semaphores:

One is the P operation, which decrements the semaphore by 1. If the semaphore is < 0 afterwards, the resource is occupied and the process must block and wait; if the semaphore is >= 0 afterwards, a resource was available and the process continues normally. The other is the V operation, which increments the semaphore by 1. If the semaphore is <= 0 afterwards, some process is blocked waiting, so one of them is woken up; if the semaphore is > 0 afterwards, no process is currently blocked.

The P operation is used before entering the shared resource, and the V operation is used after leaving the shared resource. These two operations must be paired.

To make two processes' access to shared memory mutually exclusive, we can initialize the semaphore to 1.

Process A performs the P operation before accessing the shared memory. Since the initial semaphore value is 1, it becomes 0 after A's P operation, which means a resource was available, and A may access the shared memory. If process B also wants to access the shared memory at this point and performs the P operation, the semaphore becomes -1, which means the critical resource is occupied, so process B blocks. When process A finishes accessing the shared memory, it performs the V operation, restoring the semaphore to 0 and waking up the blocked process B, which can then access the shared memory. Finally, when B finishes its access, it performs the V operation, restoring the semaphore to its initial value of 1. A semaphore initialized to 1 acts as a mutex semaphore: it ensures that at most one process accesses the shared memory at any time, which protects the shared memory.

Now suppose process A produces data while process B reads that data. The two processes cooperate and depend on each other: process A must produce the data before process B can read it, so execution must follow that order.

To achieve this multi-process synchronization with a semaphore, we can initialize the semaphore to 0.

If process B runs before process A, the semaphore is 0 when B performs the P operation, so it becomes -1, which means process A has not yet produced data, and process B blocks and waits. Later, when process A finishes producing the data and performs the V operation, the semaphore changes to 0, waking up process B, which was blocked in its P operation. Once woken, process B knows that A has produced data and can read it normally.
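
A sketch of this synchronization on Linux, placing an unnamed POSIX semaphore (initialized to 0) in memory shared across fork; the structure layout and the messages are our own illustrative choices:

```c
#include <semaphore.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

struct shared { sem_t ready; char data[64]; };

int main(void) {
    /* Put the semaphore and the data in memory shared across fork(). */
    struct shared *shm = mmap(NULL, sizeof(*shm), PROT_READ | PROT_WRITE,
                              MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    sem_init(&shm->ready, 1, 0);        /* pshared = 1, initial value 0 */

    if (fork() == 0) {                  /* process B: the reader */
        sem_wait(&shm->ready);          /* P: blocks until A has produced */
        printf("B read: %s\n", shm->data);
        return 0;
    }
    /* process A: the producer */
    sleep(1);                           /* even if A is slow, B waits safely */
    strcpy(shm->data, "data produced by A");
    sem_post(&shm->ready);              /* V: wakes up B */
    wait(NULL);
    return 0;
}
```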

Signals

In abnormal situations, a process needs to be notified by means of "signals".

Signals are the only asynchronous mechanism among the interprocess communication mechanisms, because a signal can be sent to a process at any moment.

Once a signal arrives, the user process can handle it in one of three ways:

1. Perform the default action.

2. Catch the signal: define a signal handler function for the signal; when the signal occurs, the corresponding handler is executed.

3. Ignore the signal.
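
A minimal sketch showing ways 2 and 3 with the POSIX sigaction and signal calls (the handler only sets a flag, which is the conventional safe pattern):

```c
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t got_sigint = 0;

static void handler(int signo) {   /* our signal handler function */
    (void)signo;
    got_sigint = 1;                /* only set a flag: keep handlers minimal */
}

int main(void) {
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = handler;       /* way 2: catch SIGINT (Ctrl-C) */
    sigaction(SIGINT, &sa, NULL);

    signal(SIGQUIT, SIG_IGN);      /* way 3: ignore SIGQUIT */
    /* SIGTERM is left alone: way 1, the default action (terminate) */

    while (!got_sigint)
        pause();                   /* sleep until a signal is delivered */
    printf("caught SIGINT, exiting cleanly\n");
    return 0;
}
```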

Sockets

Socket communication is needed to communicate with processes on different hosts across the network.

Locks

Mutexes and spinlocks

The purpose of locking is to ensure that only one thread accesses the shared resource at any time. In this way, the corruption of shared data by multiple threads can be avoided.

  • If a mutex fails to lock, the thread releases the CPU to other threads;
  • If a spin lock fails to lock, the thread busy-waits until it gets the lock.

A mutex is an "exclusive lock". For example, when thread A locks the mutex successfully, the mutex is owned by thread A; as long as A does not release the lock, thread B's attempt to lock will fail, and B will release the CPU to other threads. The locked code that thread B was trying to run naturally blocks.

When a mutex fails to lock, execution falls from user mode into kernel mode, and the kernel switches threads for us. This simplifies using the lock, but it has a performance cost: the cost of two thread context switches.

If you can be sure that the locked code executes quickly, you should not use a mutex; use a spin lock instead. Otherwise, use a mutex.

Spin locks are faster and cheaper than mutexes because locking and unlocking happen in "user mode", without active thread context switches.

With spin locks, when multiple threads compete for the lock, a thread that fails to acquire it "busy-waits" until it gets the lock.

The spin lock is the simplest kind of lock: it keeps spinning, burning CPU cycles, until the lock becomes available. Note that on a single-core CPU, a preemptive scheduler is required (one that keeps interrupting the running thread via the timer so other threads can run); otherwise spin locks cannot be used on a single CPU, because a spinning thread never gives up the CPU.
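
For reference, pthreads offers both kinds of lock. A sketch of the two APIs (pthread_spin_* is an optional POSIX feature, present on Linux; the two phases exist only so each lock demonstrably protects the counter):

```c
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
static pthread_spinlock_t spin;
static int counter = 0;

static void *with_mutex(void *arg) {
    (void)arg;
    pthread_mutex_lock(&mtx);    /* on contention: sleep, kernel switches threads */
    counter++;
    pthread_mutex_unlock(&mtx);
    return NULL;
}

static void *with_spinlock(void *arg) {
    (void)arg;
    pthread_spin_lock(&spin);    /* on contention: busy-wait in user mode */
    counter++;
    pthread_spin_unlock(&spin);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_spin_init(&spin, PTHREAD_PROCESS_PRIVATE);

    /* phase 1: two threads contend on the mutex */
    pthread_create(&t1, NULL, with_mutex, NULL);
    pthread_create(&t2, NULL, with_mutex, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    /* phase 2: two threads contend on the spin lock */
    pthread_create(&t1, NULL, with_spinlock, NULL);
    pthread_create(&t2, NULL, with_spinlock, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("counter = %d\n", counter);  /* 4 increments in total */
    pthread_spin_destroy(&spin);
    return 0;
}
```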

Read-write locks

Read/write locks apply to scenarios where read and write operations can be clearly distinguished.


Read/write locks work like this:

When no thread holds the "write lock", multiple threads can hold the "read lock" concurrently. However, once a thread holds the "write lock", the read-lock operations of reader threads are blocked, and the write-lock operations of other writer threads are blocked as well. Read-write locks therefore show their strength in scenarios with many reads and few writes.
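
This behavior is exactly what the POSIX pthread_rwlock API provides; a minimal sketch:

```c
#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t rwlock = PTHREAD_RWLOCK_INITIALIZER;
static int data = 0;

static void *reader(void *arg) {
    (void)arg;
    pthread_rwlock_rdlock(&rwlock);   /* many readers may hold this at once */
    printf("read %d\n", data);
    pthread_rwlock_unlock(&rwlock);
    return NULL;
}

static void *writer(void *arg) {
    (void)arg;
    pthread_rwlock_wrlock(&rwlock);   /* exclusive: blocks readers and writers */
    data++;
    pthread_rwlock_unlock(&rwlock);
    return NULL;
}

int main(void) {
    pthread_t r1, r2, w;
    pthread_create(&r1, NULL, reader, NULL);
    pthread_create(&r2, NULL, reader, NULL);
    pthread_create(&w, NULL, writer, NULL);
    pthread_join(r1, NULL);
    pthread_join(r2, NULL);
    pthread_join(w, NULL);
    return 0;
}
```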

Read-write locks can be classified into read-priority locks and write-priority locks.

Read-priority lock

When reader thread A holds the read lock first, writer thread B blocks when it tries to acquire the write lock. While B is blocked, a subsequent reader thread C can still obtain the read lock. Writer B only gets the write lock after readers A and C have both released the read lock.

Write-priority lock

When reader thread A holds the read lock first, writer thread B still blocks when it tries to acquire the write lock. But while B is blocked, a subsequent reader thread C fails to obtain the read lock and blocks in its read-lock operation. As soon as reader A releases the read lock, writer B can successfully acquire the write lock.

Fair read-write lock

A fairly simple implementation is to queue the threads that request the lock: both writer and reader threads acquire it in first-in, first-out order. This way reader threads can still run concurrently, and neither side suffers "starvation".

Optimistic and pessimistic locks

Mutexes, spinlocks, and read-write locks are pessimistic locks.

A pessimistic lock, as the name suggests, assumes the worst: it believes that multiple threads are very likely to modify the shared resource at the same time, so conflicts are likely, and therefore the shared resource must be locked before it is accessed.

Conversely, if the probability of multiple threads simultaneously modifying a shared resource is low, optimistic locking can be used.

Optimistic locks take the optimistic view: they assume the probability of conflict is very low. They work like this: first modify the shared resource, then verify whether a conflict occurred during that time. If no other thread modified the resource, the operation is complete; if another thread did modify it, the operation is abandoned.

Optimistic locking does not actually lock anything, which is why it is also called lock-free programming.

Optimistic locking is therefore only worth considering in scenarios where the probability of conflict is very low and the cost of locking is very high.
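
One common way such optimistic, lock-free updates are realized is a compare-and-swap loop: read the value, compute a new one privately, and publish it only if nothing changed in the meantime. A sketch with C11 atomics (the balance/deposit names are ours; this variant retries on conflict instead of giving up):

```c
#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

static _Atomic int balance = 100;

static void *deposit(void *arg) {
    (void)arg;
    for (;;) {
        int seen = atomic_load(&balance);        /* 1. read the current value */
        int wanted = seen + 10;                  /* 2. modify a private copy  */
        /* 3. verify no conflict happened and publish atomically */
        if (atomic_compare_exchange_weak(&balance, &seen, wanted))
            break;                               /* no conflict: done */
        /* another thread modified balance in the meantime: retry */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, deposit, NULL);
    pthread_create(&t2, NULL, deposit, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("balance = %d\n", atomic_load(&balance));  /* always 120 */
    return 0;
}
```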

Deadlock

Thread A holds resource A and is waiting to acquire resource B. A deadlock occurs when thread B holds resource B but is waiting to acquire resource A.

Conditions for deadlock:

  • Mutual exclusion: multiple threads require mutually exclusive access to a resource (the producer and consumer compete for one mutex lock);
  • No preemption: a resource cannot be taken away from the thread holding it (the consumer grabs the mutex, forcing the producer to wait);
  • Hold and wait: a thread holds one resource while waiting for another (after the consumer acquires the mutex, it waits for the producer's sem_post(&not_empty));
  • Circular wait: a cycle forms among the threads, each holding a resource that the next thread in the cycle is requesting.

Method 1. Break the mutual-exclusion condition

If we can give each thread its own separate copy of resources A and B before the two threads run, deadlock is effectively avoided.

Method 2. Break the circular-wait condition

Deadlock can be avoided by forcing every thread to acquire resources in the same fixed order, say "take A first, then B", as the sketch below shows.
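
A sketch of this fixed-ordering rule with two pthread mutexes standing in for resources A and B (illustrative scaffolding, not taken from the text):

```c
#include <pthread.h>

static pthread_mutex_t lockA = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lockB = PTHREAD_MUTEX_INITIALIZER;

/* Every thread takes the locks in the same global order: A, then B.
 * With a single fixed order, no circular wait can ever form. */
static void *thread1(void *arg) {
    (void)arg;
    pthread_mutex_lock(&lockA);   /* A first... */
    pthread_mutex_lock(&lockB);   /* ...then B */
    /* ... use both resources ... */
    pthread_mutex_unlock(&lockB);
    pthread_mutex_unlock(&lockA);
    return NULL;
}

static void *thread2(void *arg) {
    (void)arg;
    /* The deadlock-prone version would lock B first here;
     * following the same A-then-B order breaks the cycle. */
    pthread_mutex_lock(&lockA);
    pthread_mutex_lock(&lockB);
    /* ... use both resources ... */
    pthread_mutex_unlock(&lockB);
    pthread_mutex_unlock(&lockA);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, thread1, NULL);
    pthread_create(&t2, NULL, thread2, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```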

Method 3. Break the no-preemption condition

The no-preemption condition means a thread keeps its resources until it returns them voluntarily. One way to break it is to set a maximum holding time: for example, if a thread cannot make progress within 10 minutes, it returns the resources it already holds.

Method 4. Break the hold-and-wait condition

We can effectively break this condition by requiring that, in all cases, a thread that already holds one resource must return it before requesting another.