Context switch
The CPU runs tasks in turn using time-slice scheduling: after the current task uses up its time slice, the CPU switches to the next task. Before switching, the state of the current task is saved so that it can be restored when the task is scheduled again. This process of saving one task's state and loading another's is a context switch.
Where are the context-switching performance costs?
Excessive context switching makes the CPU behave like a porter, constantly shuttling between registers and run queues and spending more time switching threads than doing useful work in them. The direct costs are saving and restoring CPU registers and executing the scheduler code. The indirect cost is degraded cache behavior: after a switch, data shared between cores may have to be reloaded into the CPU caches.
How do I view the number and duration of context switches?
- The duration of a context switch can be measured with Lmbench (its lat_ctx benchmark).
- The number of context switches can be viewed with vmstat (the cs column).
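On Linux, a process can also inspect its own counters directly: /proc/&lt;pid&gt;/status contains voluntary_ctxt_switches and nonvoluntary_ctxt_switches lines. A minimal sketch (class and method names are hypothetical, and this only works on Linux):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;

public class CtxSwitchCount {
    // Reads this process's own context-switch counters from /proc/self/status.
    // Linux-specific: the file exposes voluntary_ctxt_switches (e.g. waiting
    // on I/O) and nonvoluntary_ctxt_switches (preempted) lines.
    static List<String> counters() throws IOException {
        return Files.readAllLines(Paths.get("/proc/self/status")).stream()
                .filter(l -> l.contains("ctxt_switches"))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) throws IOException {
        counters().forEach(System.out::println);
    }
}
```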
What are the causes of context switching?
In a preemptive operating system, context switches generally occur for the following reasons:
- The current task's time slice is used up, and the CPU schedules the next task as usual.
- The current task blocks on I/O, so the scheduler suspends it and runs the next task.
- Multiple tasks compete for a lock; if the current task fails to acquire it, the scheduler suspends it and runs the next task.
- User code suspends the current task (for example, by sleeping), voluntarily giving up the CPU.
- A hardware interrupt occurs.
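The lock-contention cause can be demonstrated with a small sketch (the class and method names here are made up for illustration): two threads increment a shared counter under a ReentrantLock, so whichever thread loses the race is parked by the scheduler until the lock is free.

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockContention {
    // When a thread fails to grab a held lock, the scheduler parks it and
    // runs another thread -- one of the context-switch causes listed above.
    static int runContended(int perThread) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();
        int[] counter = {0};
        Runnable task = () -> {
            for (int i = 0; i < perThread; i++) {
                lock.lock();   // a losing thread blocks here and is switched out
                try { counter[0]++; } finally { lock.unlock(); }
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        return counter[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runContended(100_000)); // 200000
    }
}
```

Because every increment happens under the lock, the final count is exact, but each failed lock acquisition is a potential context switch.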
How do I reduce context switches?
Context switching can be reduced by lock-free concurrent programming, CAS algorithms, using as few threads as necessary, and using coroutines.
- Lock-free concurrent programming. When multiple threads compete for a lock, context switches occur, so when multiple threads process shared data, techniques that avoid locks can help. For example, data can be partitioned by hashing its ID into segments, with each thread processing a different segment.
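The segmentation idea above can be sketched as follows (the class and method names are made up for illustration): each thread hashes the record IDs and only touches the IDs that fall into its own segment, so no two threads ever write the same slot and no lock is needed.

```java
public class SegmentedProcessing {
    // Partition record ids by hash so each thread owns one segment
    // and needs no lock.
    static long sumSegmented(int idCount, int nThreads) throws InterruptedException {
        long[] perThreadSum = new long[nThreads]; // each slot written by exactly one thread
        Thread[] workers = new Thread[nThreads];
        for (int t = 0; t < nThreads; t++) {
            final int segment = t;
            workers[t] = new Thread(() -> {
                for (int id = 0; id < idCount; id++) {
                    // the hash of the id picks the segment; this thread only
                    // touches its own ids, so no synchronization is needed
                    if (Math.floorMod(Integer.hashCode(id), nThreads) == segment) {
                        perThreadSum[segment] += id;
                    }
                }
            });
            workers[t].start();
        }
        for (Thread w : workers) w.join();
        long total = 0;
        for (long s : perThreadSum) total += s;
        return total;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(sumSegmented(1_000, 4)); // 499500
    }
}
```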
- CAS algorithm. Java's java.util.concurrent.atomic package uses the CAS (compare-and-swap) algorithm to update data without locking.
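A minimal sketch of the CAS retry loop (the class and method names here are hypothetical, but compareAndSet is the real AtomicInteger API): on failure the caller simply re-reads and retries instead of blocking, so the thread is never parked.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasExample {
    // The retry loop used by java.util.concurrent.atomic classes:
    // compareAndSet(expected, updated) succeeds only if the value is still
    // 'expected'; on failure we re-read and retry rather than block on a lock.
    static int casIncrement(AtomicInteger value) {
        int current;
        do {
            current = value.get();
        } while (!value.compareAndSet(current, current + 1));
        return value.get();
    }

    public static void main(String[] args) {
        System.out.println(casIncrement(new AtomicInteger(0))); // 1
    }
}
```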
- Use the minimum number of threads. Avoid creating unnecessary threads: for example, if there are only a few tasks but many threads are created to process them, a large number of threads will simply sit idle waiting.
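One common way to keep the thread count bounded is a fixed-size pool sized to the machine, sketched below (class and method names are made up for illustration): tasks are queued onto a small pool rather than each getting its own thread.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class MinimalThreads {
    // Reuse a small pool sized to the machine instead of one thread per task:
    // fewer runnable threads means fewer context switches.
    static int runOnPool(int tasks) throws InterruptedException {
        ExecutorService pool =
                Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < tasks; i++) {
            pool.submit(done::incrementAndGet); // queued, not given a fresh thread
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return done.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runOnPool(100)); // 100
    }
}
```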
- Coroutines: schedule multiple tasks within a single thread, keeping all switching between tasks inside that thread so the OS never performs a context switch for them.
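Java has no classic coroutines, but the idea can be approximated with a cooperative run queue in one thread, as in this sketch (class and method names are hypothetical): each task does a slice of work and re-enqueues itself to "yield", so tasks interleave with no OS context switch at all.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class SingleThreadScheduler {
    // Cooperative scheduling in one thread: each task runs one step, then
    // re-enqueues itself to yield, so tasks interleave without any OS-level
    // context switch.
    static List<String> run(String[] names, int steps) {
        List<String> log = new ArrayList<>();
        Deque<Runnable> runQueue = new ArrayDeque<>();
        for (String name : names) {
            runQueue.add(new Runnable() {
                int step = 0;
                public void run() {
                    log.add("task " + name + " step " + step);
                    if (++step < steps) runQueue.add(this); // cooperative yield
                }
            });
        }
        while (!runQueue.isEmpty()) {
            runQueue.poll().run(); // all "switching" happens inside this one thread
        }
        return log;
    }

    public static void main(String[] args) {
        run(new String[] {"A", "B"}, 3).forEach(System.out::println);
    }
}
```

Running main interleaves the two tasks: task A step 0, task B step 0, task A step 1, and so on.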